Bidtellect’s Information Technology team recently took on a huge endeavor to revamp and upgrade Bidtellect’s server environment, a process that was long overdue. Not only will it help the team keep pace with ever-changing technology, it will also allow them to streamline their processes. The undertaking took 32 days of nonstop work – “Work eat sleep repeat,” said Jason Taylor, Director of Information Technology – with sleepless nights and 40-hour sprints in which the team took turns napping. CTO Mike Conway likened the endeavor to changing four tires on a car while speeding down the highway at 80mph – without crashing.
Bidtellect partnered with Cisco and R2 Unified Technologies to deploy wide-reaching upgrades, including a modular network and server fabric platform with extreme throughput, compute, and storage density, all while maintaining efficiency. The significant investment includes large-scale additions in computational capacity and data storage. The upgraded infrastructure allows Bidtellect to listen to 3.5x the number of queries and process 3x the number of auctions per second, laying the foundation for the company’s current growth trajectory.
According to JT, the team essentially made the capacity available to increase supply and demand and capture more revenue, without increasing monthly costs or negatively affecting EBITDA. If Bidtellect were a shopping mall, we expanded the mall to make room for more shops and more shoppers!


Check out our Official Press Release here: Bidtellect’s Answer for Growing Advertiser Demand is Major Infrastructure Environment Overhaul

Check out JT’s breakdown below:

What did we do?

Replaced and upgraded a majority of the environment!

    • All UCS Errthang via Cisco Fabric Interconnects – Extremely fast networking and centralized server management
    • New Native Blades – Faster CPUs, increased to 96GB of RAM, ~10-15% capacity increase
    • Tracking – Migrated DC3 tracking to repurposed VM blades, 24C/48T x 4
    • Kafka, Aerospike – Replaced, some performance increase
    • Namenodes – Upgraded CPU/RAM, all SSD for improved indexing speed
    • Datanodes – 3x storage increase, 2x RAM and CPU performance increase
    • Firewalls – Upgraded 3x cryptography throughput, redundancy in DC3
    • Core Network – New ASRs and Nexus switches in DC3 to support more throughput and port density
    • Load Balancers – 3-3.5x raw QPS capacity increase, 5-6x SSL offload capacity increase
    • VM Platform – Increase VM platform capacity, more cores, more RAM
    • Modular scalability

Why did we do it?

  • Replace aging and end-of-life equipment.
  • We were at capacity limitations of the platform with no path for growth.
  • Able to deliver more revenue without negatively affecting profitability. In fact, this helps profitability.
  • First step of a truly scalable Infrastructure environment that will allow us to continue increasing our Demand and Supply diversity along with capturing more revenue.

Who was involved?

  • Helgi – Never stopped working
  • JT – Never stopped complaining
  • Chris – Never stopped eating
  • Mike – Never stopped asking if it was done yet
  • Rob – Always ready and willing
  • Luke – The secret weapon
  • Roei – Silent but deadly
  • Karl – Morale support

How much did it cost?

  • Little to no increase in MRC costs by replacing Winmark and seeking our own financing
  • Opex NRC cost for temp SE, travel, lodging, car, food and misc datacenter needs
  • Why is this important?
    • These costs directly affect profitability (EBITDA), which directly affects bonuses
  • Allows us to capture more revenue without increasing Infrastructure Costs
  • A little piece of my soul

Fun Facts

  • Our power usage is equal to approximately 12-15 single-family homes at the peak of a FL summer, yet our floor space is less than 350 sq ft combined.
  • We could hypothetically build approximately 1,796 home computers if we divided up the compute resources in Ashburn and San Jose.
  • Helgi and I went to Target on Easter Sunday and didn’t understand why they were closed. We also didn’t realize it was Sunday, let alone Easter Sunday. The concept of “time of day” or “day of the week” was already lost at this point. But we were tired of doing laundry and were just going to buy new clothes.
  • In the last hotel we stayed at, Helgi and I had to fix the hotel’s internet so that we could work. We got on the phone with their ISP’s support and worked through the issue. The hotel still didn’t give us free breakfast.
  • The Load Balancers we deployed in DC3 are the same devices used as an internet gateway for the entire country of Jamaica.

Exploring what may be coming next…

  • MORE QPS & MORE $$$
  • More Native and Tracking resources
  • Upgrade AD-IX port and enable public peering
  • Additional uplinks to service providers and enable Multipath BGP (capacity-level redundancy)
  • Load Balancing the Load Balancers… wait, what?
  • Infrastructure Patch Management, a step towards patch automation
  • Windows 2016/2019? .Net 5.0? (test in EU first)
  • EU going physical, in a good way
  • APAC footprint
  • Dedicated Stage Environment
  • Deploy an ACI environment (10x scale plan)
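For the curious, the Multipath BGP item above roughly means telling the routers to keep and use more than one best path to the internet at once. A minimal IOS-style sketch of the idea (example private ASN and documentation IP addresses, not Bidtellect’s actual configuration):

```
router bgp 64512                         ! example private ASN
 neighbor 198.51.100.1 remote-as 64601   ! uplink to provider A (example address)
 neighbor 203.0.113.1 remote-as 64602    ! uplink to provider B (example address)
 bgp bestpath as-path multipath-relax    ! permit multipath across different provider ASNs
 maximum-paths 2                         ! install both eBGP paths for load sharing
```

With both paths installed, traffic is shared across uplinks, and if one provider drops, the other is already carrying traffic – capacity-level redundancy rather than a cold standby.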

Stay tuned for an official Press Release!