On Hiring and FizzBuzz


I read an excellent article today called “This is why you don’t hire good developers,” written by Laurie Voss, CTO of npm.

In the article, Laurie has a lot of great advice for conducting technical interviews for developers as well as some things that you shouldn’t do.

As I said, it’s an excellent read and I agree with almost every point in it, but I did take issue with one point.

People ask questions in interviews about obscure syntactical features of programming languages, or details of popular APIs. The famous fizzbuzz test simply asks “are you aware of the modulo operator?” If the answer is “no” then they are probably a pretty weak candidate, but it provides exactly 1 bit of data. Yet people will spend twenty minutes on it in an interview, a huge waste of limited time.

I happen to like the fizzbuzz test. Jeff Atwood of Coding Horror fame said the following regarding it:

I am disturbed and appalled that any so-called programmer would apply for a job without being able to write the simplest of programs. That’s a slap in the face to anyone who writes software for a living. -Jeff Atwood, Why Can’t Programmers.. Program?

See, claiming that fizzbuzz only tests whether or not someone understands modulo seems way off to me. Let’s take a look at what basic fizzbuzz is:

Write a program that prints the numbers from 1 to 100. But for multiples of three print “Fizz” instead of the number and for the multiples of five print “Buzz”. For numbers which are multiples of both three and five print “FizzBuzz”. -Imran Ghory, Using FizzBuzz to Find Developers Who Grok Coding

Does that sound like something that just tests knowledge of modulo? It also tests looping, conditionals, and, most importantly, basic problem solving.
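
For reference, here’s one straightforward way it might look in C# (one of many equally valid solutions):

using System;

class FizzBuzz
{
  static void Main()
  {
    for (var i = 1; i <= 100; i++)
    {
      // Handle the combined case first, or "FizzBuzz" never gets printed.
      if (i % 15 == 0)
        Console.WriteLine("FizzBuzz");
      else if (i % 3 == 0)
        Console.WriteLine("Fizz");
      else if (i % 5 == 0)
        Console.WriteLine("Buzz");
      else
        Console.WriteLine(i);
    }
  }
}

Even this tiny program has a problem-solving wrinkle: the multiples-of-fifteen case has to be checked before the individual cases, and plenty of candidates trip over exactly that.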

Take a look at the fizzbuzz problem Trello lists on its developer careers page: it asks the applicant to look at a hashing algorithm and find the original string from an output hash. That is considerably more a test of basic problem solving than a test of modulo.

I know that if I am looking for a developer I want to work with, I want to make sure that she has an excellent grasp of requirements and can both work through and explain her problem solving process.

Creating and Deploying a Cloud-Hosted, Cross-Origin, Protocol Agnostic Push Server in 10 Minutes

Would you like to enable a push service? Here is a very easy way to create one. This service will allow anyone to connect from their site, let you send real-time push notifications to them, and let them send notifications to you and, should you choose, to other people connected to your service.

This is even easier if you don’t want to open it up for others to connect to your service and you just want to consume the service yourself, but let’s not take that easy route. We’re going to make it a little more interesting.

Just for fun, let’s also allow people to connect to your service whether they’re connected to a secure (https) site or just a regular site (http).

Step 1:
Launch Visual Studio 2013.

I know, I hadn’t mentioned the tooling we were going to use. This is going to use Visual Studio, C#, SignalR, NuGet, and Microsoft Azure. I’m sure you could accomplish the same thing with Node.js and Socket.IO, but I don’t know how off the top of my head, and I don’t want to cover both stacks today.

Step 2:
Start new Web Project.

Create a new Web Project

Bonus: Application Insights is right there so you’re going to get helpful telemetry data immediately, and you can access this data through the Azure management portal.

Step 3:
You want to make this an Empty project, hosted in the cloud, with no authentication. While I always recommend unit tests, we’ll skip them here because we’re keeping it simple.

Project Settings

You’ll also need to complete some Azure settings to publish this in the cloud, such as where your service is going to be hosted.

Complete Azure settings

Step 4:
Right click your project and select Add SignalR Hub Class (v2).

You’re going to need to enter a hub name. For this demo, let’s call it PushNotificationHub.

This is going to generate a SignalR hub, which will handle the server side of the communication. We’re sticking with the auto-generated code for this demo, but this is where you’ll add methods that do more interesting things for your clients to consume.
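
To give a sense of the shape of a hub, here is a minimal sketch. The generated template only contains a simple sample method; the Notify method below is my own illustrative addition, not something the template produces.

using Microsoft.AspNet.SignalR;

public class PushNotificationHub : Hub
{
  // Illustrative method (not part of the generated template):
  // broadcasts a message to every connected client, which will
  // listen for "notify" on their side of the proxy.
  public void Notify(string message)
  {
    Clients.All.notify(message);
  }
}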

Step 5:
Right click your project and select Add OWIN Startup Class.

Startup makes sense for a name, so let’s use that.

This will generate a class that ties in with OWIN middleware and will automatically start up your hub when this code is running in the cloud.
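
The generated class looks roughly like this (the PushServer namespace is just a placeholder for whatever your project is named):

using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof(PushServer.Startup))]

namespace PushServer
{
  // PushServer is a placeholder; your project's namespace will appear here.
  public class Startup
  {
    public void Configuration(IAppBuilder app)
    {
      // The SignalR and CORS wiring from Step 7 goes here.
    }
  }
}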

Step 6:

Add the Microsoft.Owin.Cors NuGet package.

Adding CORS support.

This will enable adding CORS support to the service.

Step 7: Add this to the Configuration method of the Startup class:

// Requires "using Microsoft.AspNet.SignalR;" and "using Microsoft.Owin.Cors;" at the top of Startup.cs.
app.Map("/signalr", map =>
{
  // Allow cross-origin requests from any domain (tighten or remove this as needed).
  map.UseCors(CorsOptions.AllowAll);
  var hubConfiguration = new HubConfiguration();
  // Run the SignalR pipeline for requests made to this path.
  map.RunSignalR(hubConfiguration);
});

This maps incoming requests made to the signalr path on your site to the SignalR hub — very important. It also tells the application that CORS is enabled for all domains. This can be limited or removed based on your needs.

Step 8:
Add the Microsoft.AspNet.SignalR.Utils NuGet package.

Adding SignalR utils

This does nothing for your code, but it does provide a tool in your packages folder that you can use to manually generate the JavaScript proxy file the clients will use. This is needed later to add protocol-agnostic support.

Step 9:
Create signalr.exe.config with a binding redirect for Json.NET, since the tool relies on Json.NET 4.5.

Sadly, this tool is currently dependent on Json.NET 4.5. That’s a great library, but because the official release is now at a higher version, we’ll need to add a binding redirect so that dependencies on the old version are automatically redirected to the new version.

Create a signalr.exe.config file next to the signalr.exe file in the packages\Microsoft.AspNet.SignalR.Utils.2.1.0\tools folder.

The contents of the file should be as follows:

<?xml version="1.0"?>
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- Redirect SignalR JSON -->
        <assemblyIdentity name="Newtonsoft.Json" publicKeyToken="30ad4fe6b2a6aeed" culture="neutral" />
        <bindingRedirect oldVersion="4.5.0.0-6.0.0.0" newVersion="6.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/>
  </startup>
</configuration>

Step 10:
To create a proxy file, perform the following steps:

  1. Install the Microsoft.AspNet.SignalR.Utils NuGet package (already done in Step 8).
  2. Open a command prompt and browse to the tools folder that contains the SignalR.exe file. The tools folder is at the following location:
    [your solution folder]\packages\Microsoft.AspNet.SignalR.Utils.2.1.0\tools
  3. Enter the following command:
    signalr ghp /path:[path to the .dll that contains your Hub class]
    Tip: The path to your .dll is typically the bin folder in your project folder.
    This command creates a file named server.js in the same folder as signalr.exe.
  4. Put the server.js file in an appropriate folder in your project, rename it as appropriate for your application, and add a reference to it in place of the “signalr/hubs” reference.

Step 11:
Place server.js in a relevant folder and add it to your solution.

I prefer to have it in my scripts folder.

Added server.js to scripts

Step 12:
Update the server.js file.

Update the hub connection address with the protocol-relative URL of the site where the server is going to be hosted (for example, //superpushers.azurewebsites.net rather than http://superpushers.azurewebsites.net). This enables simultaneous support for HTTP and HTTPS.

Updating the hub connection address

Step 13:
Right click the solution, hit Publish and publish to Azure.

Publish to Azure

Now your service is hosted in the cloud. Was creating the service as bad as it sounded when we started?

Step 14:
Add the necessary script references to the client pages.

<script src="//superpushers.azurewebsites.net/Scripts/jquery-1.10.2.min.js"></script>
<script src="//superpushers.azurewebsites.net/Scripts/jquery.signalR-2.0.2.min.js"></script>
<script src="//superpushers.azurewebsites.net/Scripts/server.js"></script>

That’s it! The whole thing is done. The clients can now use JavaScript to consume the methods that are on your hub. Now go out there and create something!

From Waterfall to Agile… slowly

Over the years, I’ve worked with a few different development methodologies. Previously, I was working in scrummerfall. This means that we had two-week sprints, tracked story points, effort, and allocation, tasked out user stories, etc. But it wasn’t always easy to correct flawed design, and integration did not happen on a regular basis. Instead we’d end up with branches where things were fixed, which then spent months waiting to be part of an official release.

That was the most painful part of the process for me. If I fix a production bug impacting even one user, I don’t want it just sitting around somewhere waiting for a release. I want it released as soon as the fix is properly verified. Every time your users run into a known bug in production, their confidence in your product and their trust in you are diminished. Telling them that you’ve fixed the bug and it will be released in two months might keep them from complaining further, but it’s hardly a consolation.

I’d rather go further towards agile. Let’s get those fixes into customers’ hands as soon as possible. I find that’s what I’m working towards now.

I’ve come into an amazing new job working with great people, and I’ve joined the organization at a great time. Things are changing and people are open to change. The development methodology was outdated, but we are rapidly iterating on it and getting better (funny how that works). The pain of not changing is what you might imagine: bugs are not fixed and released quickly, new features are monolithic projects that live on their own branches, and months later, when they are ready to be reintegrated, you had better believe there are going to be merge conflicts. Time had better have been scheduled into the project just for resolving them, or that release date is going to slide into a death march.

Now, all of that is starting to change. These monolithic projects are starting to be broken down into smaller chunks that can be completed and integrated independently of the projects they were part of. More collaboration is happening to make that possible. Testing is being improved at a rapid pace. The environment is clearly headed in a direction where contributing is going to be a total joy.

Having branches with smaller scope doesn’t just mean that code is integrated into your master branch sooner. It also means that developers branching off of master are more likely to already have the architectural changes they need instead of duplicating work. Merge conflicts will occur less often, meaning less wasted developer time and more time spent solving problems and writing code. With code being integrated into master more often, you also avoid code sitting around under-appreciated in feature branches for months, and it can instead make it to the customer that much faster.

Heck, my next project is even moving to sprints. I’m going to be leading sprint planning sessions as early as next week. This should increase developer collaboration and help ensure that the proper people involved are thinking about what they are going to need to do before it is too late.

But how did we get to this point?

Well, step one was identifying a set of projects that overlapped in a significant way. These are projects that, if completed in a bubble, would cause a large number of headaches, since similar architectural changes would be needed in both. If they were completed without a strong story of collaboration, they could have been implemented completely differently. Combining these projects and breaking them up into more logical chunks was suggested to the other project lead, who decided it wasn’t a bad idea. Once there was buy-in on that level, presenting the idea to others in the organization was easy, and it was widely accepted. Not only did the change make logical sense, but because it was a collaborative decision, both leads felt like part of it and got to take ownership of the task.

Step two was identifying that everyone could better track the tasks and logical chunks (user stories?) if we had a collaborative way to organize the projects. This led to the suggestion and acceptance of sprints and sprint planning. Again, it was a decision made as a group, so that all parties feel invested in the success of this project. If it succeeds, then we all succeed in helping bring a better methodology to the team. Group success is always better than solo success.

When acting as a catalyst for change, you need to identify the people that are going to back you, those who are skeptical, and those who will not want to change. The skeptics are the important group here since they are the only group that you can sway and could be your strongest supporters in the long run. How do you change the mind of skeptics? I find that the best way is to bring them into the decision. Do not attempt to force process change on anyone. If someone is part of the decision with you then they will get to share in its success and will be pushing for it to succeed. If you force a process change, then they may go along with it grudgingly and never truly enable it to succeed. Apathy is not your friend when trying to bring about change.

That’s it for now. This journey still feels like it’s just beginning. Any challenges encountered along the way will be shared here to help out anyone else going through the transition from waterfall to agile.

Take the blame, not the credit.

When a problem happens, I try to be the first to take the blame.
When something great gets done, I try to give credit to others.


Everyone seems to want the recognition and respect that comes with taking the credit for good work. I’ve found that I’d much rather give that credit to others who were involved.

What do you get when you take the credit? Respect, thanks, and appreciation. This makes you feel great and can be addictive, but it’s also fleeting. If you’re not able to show a pattern of success then the thought may become, “What have you done for me lately?”

But what happens when you give the credit to others? You foster respect between yourself and your coworkers. People would much rather work with someone who shares the glory than with someone who hogs it. You should take every opportunity you can to foster respect amongst the team and increase team morale and collaboration. This only becomes more important as your authority increases.

The bonds that you form by sharing the credit with your coworkers will pay off long term, and long-term benefits should be much more desirable than short-term gain. This assumes that you have a pattern of success; if instead you have a pattern of failure, you should probably look at getting some of that short-term gain, since you may have some issues with your reputation.

This doesn’t mean that you shouldn’t ever take credit for your work. When someone steps in and passes the credit back to you, be thankful and accept it; it can reflect poorly on you if you keep deflecting it. As well, if you are working solo on a project (truly solo), accept the credit, since there’s no one with whom to reasonably share it.

On the other hand, taking the blame is not as bad as it sounds on paper.

Part of the issue is that the word “blame” has a negative connotation. You hear it and imagine that everyone believes that all the ills of the world are your fault, regardless of the root cause of the issue. The reality is that the blame is not important. What is actually important is that the issues get fixed. A lot of time is wasted by people playing the blame game, and the people who suffer are our customers. Instead, take responsibility for issues that arise.

Taking responsibility does not mean that you are awful at what you do; instead, it means that you will take charge of the situation and coordinate its resolution. You should be pumped about this. Taking responsibility fosters respect in the same way as giving away the credit. Many people avoid taking responsibility for their actions and the resolution of issues they cause. It is such a breath of fresh air when someone does that people will sit up and take notice, especially when it becomes a pattern of behavior.

People are likely to notice a pattern when you are involved with many successful projects and bug fixes. Humans are excellent at pattern recognition and will be able to put two and two together without your help.

Next time, think twice before you jump up to take credit for success, or want to get involved in the blame game.

How I learned to love separation (CQS / CQRS)

I’ve mentioned it before: I’m lazy. When it comes to writing code, I’d like to read as little code as possible. I don’t want to have to read the code for every class I want to use. I don’t want to have to investigate side effects for every method I want to call. All of that is additional work when it comes to writing code — and none of it is writing code.

Why do we generally only apply this rigorous look to code within our own code bases instead of third party libraries? It’s because so many of us are well aware of the issues and code smells existing within our code bases but aren’t familiar with the smells in the third party code that we rely on every day. How did this trust issue come about? When did we stop trusting our own code?

Gotta have trust

As with most technical debt, it likely happened a little bit over time. Someone was running out of time and hacked together a quick fix that added an unexpected side effect to a method. Then the same thing happened again and again until we could no longer be sure what our code was going to do once it compiled. This code isn’t generally written with the idea that it’s going to be permanent debt. It’s just temporary! We’ll come back and fix that the right way a little later! Right? Wrong. Generally, that debt is going to exist in our code base for a long time.

Broken windows are just the start of serious degradation

That small amount of debt will act like a broken window. Technical debt will cling to it as people work around it instead of fixing it. Over time, trust in our code base is eroded, and development velocity slows to a crawl. We are all to blame.

But the good news is, this does not need to be a permanent condition! We can rebuild it. We have the technology. Where should we start? I recommend updating the code so that it behaves in the manner you expect. Every developer will implicitly read intent out of every line of code you write. You used FirstOrDefault() instead of SingleOrDefault()? Someone is going to assume that multiple matches are expected and that you only care about one of them. Developers don’t want to have to think that hard when they are writing code. We all want things to behave as expected; it’s just good design philosophy.
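
As a tiny illustration of that implicit intent (Order, CustomerId, and the orders list here are all hypothetical stand-ins):

using System.Collections.Generic;
using System.Linq;

public class Order
{
  public int CustomerId { get; set; }
}

public class Example
{
  public static void Run()
  {
    // Hypothetical data, purely for illustration.
    var orders = new List<Order>
    {
      new Order { CustomerId = 1 },
      new Order { CustomerId = 2 }
    };

    // Reads as: several matches are possible; I only care about one of them.
    var anyOrder = orders.FirstOrDefault(o => o.CustomerId == 1);

    // Reads as: at most one match should exist; SingleOrDefault throws if it finds two.
    var theOrder = orders.SingleOrDefault(o => o.CustomerId == 1);
  }
}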

Let’s move towards command-query separation.

Sometimes separation can be a good thing

“Every method should either be a command that performs an action, or a query that returns data to the caller, but not both. In other words, asking a question should not change the answer. More formally, methods should return a value only if they are referentially transparent and hence possess no side effects.” -Wikipedia, Command-query separation

When code that performs an action or side effect doesn’t return anything, we can infer from the method signature that a side effect is occurring. Otherwise there wouldn’t be any reason to call a method that has no return value. The intent of the contract is implicit. These are command methods.

Query methods have a return value and no side effects. You can invoke them again and again and the state of the application never changes. The method signature has a return value, and the caller should be able to infer that no side effects occur, so they never have to worry about what invoking the method might do.

“Functions should either do something or answer something, but not both.” -Robert C. Martin, Clean Code

Based on these definitions, we can come to the conclusion that it is okay for a command method to invoke query methods. The inverse is not true. It is not okay for query methods to invoke command methods, since that would be introducing side effects into your query.
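
Here is a quick sketch of that rule, using hypothetical names:

using System.Collections.Generic;

// Hypothetical service used to illustrate CQS.
public class AccountService
{
  private readonly Dictionary<string, decimal> _balances = new Dictionary<string, decimal>();

  // Query: returns data and changes nothing, so it is safe to call from anywhere.
  public decimal GetBalance(string accountId)
  {
    return _balances.ContainsKey(accountId) ? _balances[accountId] : 0m;
  }

  // Command: changes state and returns nothing. It may lean on the query above,
  // but the query must never lean on it.
  public void Deposit(string accountId, decimal amount)
  {
    _balances[accountId] = GetBalance(accountId) + amount;
  }
}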

This has the positive impact of allowing you to minimize the time you spend reading code and maximize the time you spend writing code.

Let’s take a look at some examples of using command-query separation:

public class PhoneBook
{
  public class PhoneBookEntry
  {
    public string Name { get; private set; }
    public string PhoneNumber { get; private set; }

    public PhoneBookEntry(string name, string number)
    {
      Name = name;
      PhoneNumber = number;
    }
  }

  public Dictionary<string, PhoneBookEntry> Entries { get; private set; }

  public PhoneBook()
  {
    Entries = new Dictionary<string, PhoneBookEntry>();
  }

  // Both a command and a query: this adds an entry and returns it.
  public PhoneBookEntry Add(string name, string number)
  {
    var entry = new PhoneBookEntry(name, number);
    Entries.Add(name, entry);
    return entry;
  }
}

This breaks command-query separation. Add() is both adding a new entry and returning that entry. This is how it should be refactored:

public class PhoneBook
{
  public class PhoneBookEntry
  {
    public string Name { get; private set; }
    public string PhoneNumber { get; private set; }

    public PhoneBookEntry(string name, string number)
    {
      Name = name;
      PhoneNumber = number;
    }
  }

  public Dictionary<string, PhoneBookEntry> Entries { get; private set; }

  public PhoneBook()
  {
    Entries = new Dictionary<string, PhoneBookEntry>();
  }

  // Command: changes state and returns nothing.
  public void Add(string name, string number)
  {
    var entry = new PhoneBookEntry(name, number);
    Entries.Add(name, entry);
  }

  // Query: returns data and changes nothing.
  public PhoneBookEntry Get(string name)
  {
    return Entries[name];
  }
}

I have removed the return from Add() and added a Get() that grabs the requested entry and does nothing else. (Keep in mind that this whole example is over-engineered to begin with, but it’s just an example.)

What other benefits do you get with command-query separation?

Developer velocity is not the only benefit. It is also easier to write tests when code is written this way. No longer do your tests for a method need to worry about both the command and the query. Instead, you can test a single thing and end up with more flexible and readable tests.
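
For example, with the refactored PhoneBook above, each concern gets its own small, focused test (sketched here with NUnit purely as an illustration):

using NUnit.Framework;

// NUnit-style tests, purely illustrative.
[TestFixture]
public class PhoneBookTests
{
  [Test]
  public void Add_StoresAnEntryUnderTheGivenName()
  {
    var book = new PhoneBook();

    book.Add("Ada", "555-0100");

    Assert.AreEqual("555-0100", book.Get("Ada").PhoneNumber);
  }

  [Test]
  public void Get_ReturnsAnEntryWithoutChangingTheBook()
  {
    var book = new PhoneBook();
    book.Add("Ada", "555-0100");

    book.Get("Ada");

    Assert.AreEqual(1, book.Entries.Count);
  }
}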

Overall, you should see an increase in developer happiness, and that excites me. I hope to one day work in a utopia where the developers aren’t just complaining that the existing code base has significant technical debt (which describes every mature code base I’ve ever worked in), but are instead talking about specifics and about what they are going to do to resolve those issues.

“The other main benefit is in handling high performance applications. CQRS allows you to separate the load from reads and writes allowing you to scale each independently. If your application sees a big disparity between reads and writes this is very handy. Even without that, you can apply different optimization strategies to the two sides. An example of this is using different database access techniques for read and update.” -Martin Fowler, CQRS
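
To make that concrete, a CQRS-flavored take on the phone book might split the read and write sides into separate interfaces that can be backed and scaled independently (these names are illustrative, not from any particular framework):

using System.Collections.Generic;

// Illustrative write side: commands that change state, optimized for consistency.
public interface IPhoneBookWriter
{
  void Add(string name, string number);
  void Remove(string name);
}

// Illustrative read side: side-effect-free queries, free to be served from a cache or a read replica.
public interface IPhoneBookReader
{
  PhoneBook.PhoneBookEntry Get(string name);
  IReadOnlyCollection<PhoneBook.PhoneBookEntry> GetAll();
}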

Why I Love Cloud Application Platforms

I love the idea of cloud application platforms. Wikipedia defines a platform as a service as the following:

“The consumer creates an application or service using tools and/or libraries from the provider. The consumer also controls software deployment and configuration settings. The provider provides the networks, servers, storage, and other services that are required to host the consumer’s application.”

This is great for me. I can be pretty lazy and network and server management is usually more responsibility than I want and more control than I need. It makes perfect sense to give this work to someone else.

I also find that deploying applications this way significantly speeds up development time. Rather than having to set up, configure, and license a server, I can simply run a publish command and the code is deployed to the cloud automatically.

Those are the kinds of benefits that I really appreciate as a back end developer.


Visual Studio 2013 is currently my usual IDE since I work in C# and ASP.NET at work, and the work Microsoft has done to tie Azure into the IDE has really paid off for me. If I want to publish a project to Azure, I can just right click the solution and publish to the cloud. A minute later — if that — I can access my project online.

With Heroku the workflow is even easier. To deploy to Heroku, all you need to do is push a Git repository to them. They will take the code and automatically deploy it into the cloud. As with Azure, a minute later you can access your code online. Heroku doesn’t natively support .NET, but it’s very easy to get a Ruby or Node.js server running.

A few weeks ago, I was involved in a hackathon at work. The goal was to develop new widgets for our platform and see what we could come up with in a day. You got to wire up a combination of HTML, CSS, and JavaScript and see what you could do. I’m mainly a platform developer: I primarily work on designing web servers as opposed to front ends. I can write JavaScript and HTML, but CSS is not easy for me and visual design takes longer than a day. It seemed to me that the only shot I had was to leverage my back end expertise.

Configuring and launching a new web server would have eaten a significantly larger portion of my day than I wanted to schedule. But, I was easily able to publish a server running a SignalR hub at the push of a button. I then used AngularJS to wire up my JavaScript to the front end and make AJAX requests to my new server.

I believe I was the only one in this hackathon to create a separate server to provide additional cross-user functionality. There were some remarkable projects that people created, but I felt like I had a real advantage in this competition.

It’s not all roses when it comes to cloud application platforms. It’s been my experience that your deployment starts out fairly cheap, but as your application grows, the costs can ramp up significantly, and more quickly than you might expect.

This means that often your best bet for these services is MVPs (minimum viable products), prototypes, and early stage web and mobile apps. As your product grows, you can often see significant savings by managing your own servers and networks.

Personally, I love these platforms and I’m going to continue to leverage them in the beginning stage of every project I’m involved with. If you decide to do the same, I think you’ll find that you’re able to get your projects deployed faster and take some of the cognitive load off your plate.

My transition from Subversion (SVN) to Git

My first source control system was SVN with TortoiseSVN. I’ve used this for years.

I started using Git about 5 months ago.

I’d dabbled with it previously, using it to push to Heroku or clone some repositories from GitHub, but nothing major. Then I started working at Igloo Software (we make a cloud-based intranet platform that you’ll actually like), and they had recently made the switch from SVN to Git. My previous job had me using SVN with TortoiseSVN, so of course the first thing I did was boot up TortoiseGit.

TortoiseGit is great at allowing you to work with Git while still pretending it’s SVN. Of course after a month I started to get concerned that I really didn’t understand what Git was doing and how it worked. This was frustrating when I’d find myself trying to accomplish different things and the results were not what I expected. In fact, more than once I completely blew away my local repository and cloned it again from scratch figuring that a fresh clone was really the best way to be comfortable with my source control.

Yes. I really was that clueless.

The smart people I work with advised me that TortoiseGit seemed to be pretty bad for working in Git, and suggested that I use SourceTree, which is a much better graphical tool for abstracting away Git’s mysterious inner workings. I did take a look at it, watched people do some work with it, and even attended a lunch and learn explaining it. In the end, I decided to get rid of the abstraction entirely and move straight to the command line.

This post is not really intended to talk about how Git works. There’s a lot of other material out there covering that topic. The first thing I did was read Pro Git, written by Scott Chacon and published by Apress. (The book is available in full online, and dead-tree copies can be purchased from Amazon.) Well, I did read the first four chapters of the book. But honestly, those four chapters were incredibly helpful and really helped me gain my footing. Suddenly I found myself being transformed from clueless newbie into someone in the office who could actually provide solid information and answers about the workings of Git. (That knowledge base is increasing all the time as I continue to learn, read, and watch various videos on Git.)

So what tripped me up about this transition?

SVN is a centralized source control system, while Git is decentralized. I’d heard this many times, but never really understood what it meant.

When I was working with SVN, whenever I’d commit I’d have to deal with potential conflicts, and everyone would need to update their working copies in order to receive my changes. I had gotten into the habit of notifying my team when I’d commit something that I thought might conflict with their work, or that had code they needed in order to continue their own work. This is not needed in Git.

It’s not needed after a commit, anyway. Git is completely decentralized. When you clone a repository, you have the whole thing. You have every version of every file that has ever existed in the repository. Locally. On your device. When you commit, you are only committing your changes to a branch that exists locally. Others can only receive your changes when two things happen: you push your changes back to origin (origin being the remote repository where you store a shared copy), and they pull the changes from origin to their local repository.

This means that, unlike SVN, if the server is down you can continue to commit; you just can’t push those changes to your remote. You can still provide those commits to your coworkers by sharing them from your machine or from a different remote entirely. The distributed nature of Git makes it extremely fault tolerant.

Because your repository lives locally, you can also create and destroy branches easily. These branches never have to live on the server. They can exist only locally, and no one else will ever have access to them unless you decide to push them to origin. Of course, you can always merge those changes back into branches that track branches on origin without ever needing to share your local branches. I found that this made it extremely easy to create a new branch to test out refactoring, or to see how new features would function with minor changes, without ever needing to pollute another branch, and I could still commit all my changes because everything occurred locally.

Another thing I didn’t understand was the difference between the staging area, the working copy, and committed code. When you are working in SVN and have some code ready to commit, you decide which files to commit and which not to commit at commit time. In Git, you can do this on an ongoing basis without needing to wait until it’s commit time. When you commit in Git, by default only the files in your staging area are committed, and you can easily control which files are added to your staging area (or you can add all changed files in one go if you so choose). At the same time, you can continue to make changes to files that are already staged; the newer changes stay out of the commit unless you stage them too. At any point you can discard those changes or add them to staging as well, without ever needing to commit them. Once they are committed, they are added to your local repository, and you’ll need to check out old versions in order to revert that code.

Most of the problems that I encountered between SVN and Git were terminology differences and the basic differences between centralized and decentralized systems. I found that once the terminology was cleared up, I was able to find answers to the issues I was having and pass those answers along to coworkers. Pro Git was an invaluable resource for me during this time.

I do recommend making the switch to Git. There are a lot of advantages to it, but there is likely going to be a learning curve as well. You’ll likely have an easier go of it if there’s someone on your team working with Git regularly already who can help the team transition and understand what is happening. And if you want to really understand how to work with Git, and not simply try to pretend Git is SVN, I’d recommend staying away from TortoiseGit for the time being. It seems like it’s more likely to hinder your learning than it is to foster it.