An early New Year Resolution

Previously, I had a job whose social media policy pretty much prevented me from talking about security-related stuff on social media, blogs, cons, etc., as people might believe it pertained to their specific environment and concerns (even if it didn’t). So I cleared down my blog of anything technical and let this site go fallow.

I’m now very grateful I work somewhere much much cooler about such things, in fact we’re positively encouraged to blog and speak at events etc, but I’ve never really taken advantage of it, until now. So my New Year Resolution (one of several) is to blog much more.

However, one thing it’s very important to keep in mind is that whilst some writing may be based on my current role, much of it isn’t. Much of it is learnt from third parties I’ve gone to in order to understand a subject more completely, so don’t assume any best practice or worst practice, witty anecdote or cautionary tale is actually indicative of any employer past or present.

WannaCry – When the press pick the wrong boogeyman and nobody listens

It was weird watching the events around WannaCry unfold on Friday in the context of my current job role, as for the first time since the ILOVEYOU worm (which turned 17 years old earlier this month) my role meant that I didn’t have any responsibility for any potentially impacted hosts. After a lifetime of working in small companies in security, IT, incident response or management roles, there was now a major bit of malware propagating that people were unprepared for, and I now worked somewhere big enough that we had other people to manage such things.

So, rather than being in the middle of the fire-fight, I sat back, glued to Twitter, and watched things unfold. It was great to watch some of the world’s (or at least in the early hours, the UK’s) best security researchers dig into what was happening and share their findings in real time on Twitter.

It became apparent that the world’s tech journalists were also following them and a lot of non-technical ones too. But as we saw a few weeks ago when the latest ShadowBrokers cache was dropped, stuff goes from 140 chars of “current thinking” to being reported worldwide as fact, pretty quickly.

The same seems to have happened again with this, only this time the misapprehension of some early tweets is leading to the blame for the rapid propagation being laid squarely at the feet of Windows XP and other EoL software. Whilst they are undoubtedly contributing factors, they are far from the only culprits, and come Monday morning there are going to be a lot of people who thought “We don’t have any XP machines, nothing to worry about” who’ll be facing a huge infection.

Now, XP didn’t become the boogeyman by accident; there are some reasons why XP was mentioned a lot in those early tweets:

1. Unlike newer versions of Windows, at the time of the outbreak there was no patch to prevent infection on XP (Microsoft released a patch for newer OSs in March, but has also now released an XP patch)

2. The infection requires SMB v1, a protocol that can happily be disabled in a Windows environment if you don’t have XP/Server 2003 machines on your network.

It was also widely noted that the NHS is notorious for running legacy software like XP.

However, what the journalists on the whole have failed to take into account is:

1. Just because you can (and should) disable SMB v1, it doesn’t mean people have done so. Many people won’t know to, others will have non-Windows devices using SMB v1, and for some, it’s just too much of a risk to change anything unnecessarily on a production network.

2. Just because Windows OSs newer than XP/Server 2003 have had a patch available since March, it doesn’t mean people have applied it.

3. Most big corps will have perimeter firewalls that prevent direct infections, but how many people have their work laptops getting infected on public wifi this weekend, only to plug that laptop into the corporate LAN on Monday morning?
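On point 1, for anyone wanting to check their own estate, SMB v1 can be queried and disabled with a couple of PowerShell one-liners on modern Windows. This is a sketch only; cmdlet availability varies by Windows version, so test before touching anything production, and disabling the protocol is no substitute for actually applying the March patch:

```powershell
# Check whether SMB v1 is currently enabled on the server side
Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol

# Disable SMB v1 on the server side (Windows 8 / Server 2012 and later)
Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force

# On Windows 8.1/10 the whole SMB1 optional feature can be removed instead
Disable-WindowsOptionalFeature -Online -FeatureName SMB1Protocol -NoRestart
```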

So, this ISN’T about XP. People could have removed their last XP machine years ago, but unless they’re getting patch management right and are on top of their network configuration (disabling unnecessary features and segmenting as much as they practically can) then come Monday morning they could be met with a rather unexpected headache.

Noob guide

Whilst I’m a “blue teamer” (I specialise in the defensive side of InfoSec), I do enjoy doing Pentesting challenges both for fun and because “know your enemy” & “always think like an attacker” are invaluable bits of advice for any defender.

One of my favourites is Labs, which closely mimic real environments you may come across in actual pentests and are designed in such a way that whilst they have a gentle learning curve, they normally require you to have decent IT and security fundamentals, rather than being aimed at people who have just installed Kali and are watching “How to be a hacker” YouTube videos.

However, for the last year or so I’ve been sat in the Telegram channel, and whilst there are an awful lot of very knowledgeable people in the channel (far more knowledgeable than me; many are professional pentesters or prepping for things like OSCP exams), there are also a lot of people who join the channel who are really struggling with the basics. So I thought I’d put together a quick FAQ for those guys.

Do I need to use Kali Linux?

No. In fact, I intend to do the next lab entirely from a Windows box to prove it’s possible (and because I love PowerShell). However, if you’re asking that question, I strongly recommend you do use it, as it’s probably the easiest starting platform to attack from.

Do I need to use’s downloadable Kali VM?

Again, no. I’d wager almost everyone completing the labs is doing so using a vanilla Kali build. The only difference with the one on the website is it comes pre-configured with everything you need to connect to their vpn.

If you’re going to struggle to connect to a vpn from Kali when the instructions are on their website, you’d probably be better off spending your time reading some Linux vpn guides first.

The vpn is connected and I can ping their gateway machine(s). Now what?

Start your pentest! Normally these labs only start with one (or possibly two) gateways machines exposed, so don’t expect to be able to access the servers behind the gateway directly. However, usually these labs do have port forwarding set up for some services, so for example hitting port 25 on the gateway machine is likely to be forwarded to port 25 on the “email host” on the internal network.

You’re normally looking for some way to compromise the gateway machine (or some machine port forwarded to from it) and then pivot to the internal hosts.
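If you’re attacking from a Windows box, a quick and dirty way to see which common ports the gateway is exposing or forwarding is Test-NetConnection. The gateway address below is a placeholder; use whatever IP the lab gives you:

```powershell
# Placeholder - replace with the lab's actual gateway IP
$gateway = '10.10.10.1'

# Probe a handful of commonly forwarded service ports
foreach ($port in 22, 25, 80, 443, 3306) {
    $up = (Test-NetConnection -ComputerName $gateway -Port $port -WarningAction SilentlyContinue).TcpTestSucceeded
    '{0,5} : {1}' -f $port, $(if ($up) { 'open (possibly forwarded)' } else { 'closed/filtered' })
}
```

Obviously nmap from Kali does this job far better; this is just for the Windows-only crowd.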

I’m on the vpn, but I get disconnected constantly. Why?

Check you don’t have more than one vpn connection active to their labs as this will normally cause them to disconnect. Failing that, check out the Service channel for service outage notifications.

I’m on the vpn, but I get disconnected hourly. Why?

Many of the hosts reset on the hour, so may disconnect you and remove any changes you have made. This is intentional. If you have something running on a host that takes over an hour (e.g. some kind of brute-force attack) you are probably approaching the problem in the wrong way.

I used “<insert tool name>”  and it found nothing. Why?

One of the great things about these labs is that they are often engineered to make life harder for people using automated tools (especially with the default options) and easier for those actually doing the attacks manually. So, just because sqlmap fails to find an SQL injection point, a given password isn’t in the default John the Ripper list, nmap doesn’t find an open port on a default scan, or a folder isn’t found by dirb, doesn’t mean that approach isn’t going to work. The lab designers know these tools well too and want to give you more of a challenge than “can you run the right tool with the default options”.

How do I get admin/root?

I’m sure they’ll prove me wrong at some point, but it’s unlikely you’ll ever need admin or root on a host to get the token. This actually makes sense, as with root access you could easily screw over the challenge for other people. However, traversing between users with different privs is quite common, often using techniques more commonly associated with escalating to root.

I’m stuck, now what?

Try Harder!

Seriously, that’s probably the first response you’re likely to get in the Telegram channel. Possibly with a link to this

It’s good advice. Go away, make a coffee, have a smoke, play Hello Kitty Island Adventure … whatever works for you. We’ve all got stuck on a challenge, then come back later with a fresh bunch of ideas.

But keep trying. These challenges are usually pretty logical and based on real world exploits, so take what you know about the situation and go hit the books (or google) and see if there is something more to learn.

Seriously, I’ve been trying for days, now what?

Well, the telegram channel is always there, but most of the people in it try to keep it spoiler free, so the usual etiquette is to ask for somebody to DM you about whatever you are struggling with.

Also, once the winners of a challenge are announced, people start publishing their solutions. These are great for getting you past your current hurdle; however, be very cautious, as once you’ve cheated and taken a peek that first time, it becomes much easier to cheat every other time.

I’ve finished this lab, now what?

Try this list, or come hang out in the telegram channel and see what others are currently working on.

Obvious disclaimer: I’m not part of the team, just a fan of their work, and this does not constitute official documentation, nor is it in any way endorsed by them.

Powershell based Plex “Local Player”

So, imagine a scenario where you’re trying to give a presentation on a customer’s PC; what you want to show is a video on a remote Plex server, but the customer’s PC is so locked down (whitelisted apps only) that whilst vlc will work, a web browser won’t! Seriously!!!

However, powershell did work and that gave me a way in.

So, what I thought would take just a few simple lines to download the file from the Plex server and play it through vlc actually became a bit of an epic.

Therefore, in case anybody ever gets stuck in the same hole or wants some sample code demonstrating how to take “streamed” content and convert it back into something a media player will play locally, the code is now on github at

To use it, just pass in the URL of the video details page in Plex, your Plex username and password, and the folder name to dump the video into (also optionally the paths to ffmpeg and vlc, or you can redefine these at the top of the script)

.\PlexLocalPlay.ps1 “!/server/01380a5c2c9b4290-9c1136b6882a65c1/details/%2Flibrary%2Fmetadata%2F12345” “” “yourplexpasswrd” “G:\Users\Glenn\Downloads”

Disclaimer: I’ve no idea if interacting with Plex in this way is against their terms and conditions. I’m also not sure any of how I’m doing it is “the right way”, because it was reverse engineered by examining how the Plex Web Player works on a laptop rather than from any official documentation. I’m also not responsible for how you use it. My use case was to download marketing material that I was allowed to distribute; I imagine doing this with your family’s blu ray collection may be illegal in many places.

For anyone writing their own version of this, a few notes about the design.

The convoluted background download. This is to address two problems:

  1. The Plex server seems to time out connections, even if they are happily delivering content. Their own web player gets around this by hitting a “ping” endpoint as a keep-alive, so we have to emulate that.
  2. Invoke-WebRequest is nice and simple, but it loads the entire downloaded content into memory and saves it upon completion. Fine for tiny webpages, a disaster waiting to happen for huge files. BITS would normally be my go-to alternative (BITS support in PowerShell is great), but it needs a Content-Length header from the server, which we’re not going to get from a stream.

    So, we have to use .net functions to stream the content into a file in a background task, so the foreground task can handle the keep-alive.

    Portions of the stream downloading code are based on this blog post –
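Stripped of the keep-alive plumbing, the chunked download boils down to something like this (a simplified sketch; the URL and output path are placeholders, and the real script runs this in a background job so the foreground can hit the ping endpoint):

```powershell
# Placeholders - in the real script these are built from the Plex details page
$videoUrl = 'http://your.plex.server:32400/video/stream'
$outFile  = 'C:\Temp\video.ts'

$response  = [System.Net.HttpWebRequest]::Create($videoUrl).GetResponse()
$inStream  = $response.GetResponseStream()
$outStream = [System.IO.File]::Create($outFile)

# Copy in 64KB chunks so the file never has to sit in memory,
# unlike Invoke-WebRequest which buffers the whole download
$buffer = New-Object byte[] 65536
while (($read = $inStream.Read($buffer, 0, $buffer.Length)) -gt 0) {
    $outStream.Write($buffer, 0, $read)
}

$outStream.Close()
$inStream.Close()
$response.Close()
```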

I also added some hokey support for roughly passing back the progress, but as we only know the size of the file on the remote OS and who knows what the transfer/transcode is going to do with it, it’s far from accurate. It also only updates once every 15 seconds (which is how often the keep alive is sent). Really, only consider this as an indicator something is still happening, not a real estimate of progress.

Obviously simply saving the stream to a file doesn’t generate a valid video file; however, ffmpeg does a brilliant job of repairing it (or has done in all my tests at least, your mileage may vary).
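For reference, the repair step is just a remux, along these lines (file names are placeholders; -c copy copies the streams into a clean container without re-encoding):

```powershell
# Remux the captured stream into a clean MP4 container without re-encoding
& ffmpeg -i 'C:\Temp\video.ts' -c copy 'C:\Temp\video.mp4'
```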