> Rely on trust and feedback, not process
Process is important when work is handed off from one team to another team. Any company with a non-trivial product will have a non-trivial team size; and thus it'll need to standardize how people hand off work.
It doesn't have to be onerous: The best processes are simply establishing boundaries so lazy people don't throw their work over to the next person. (IE, "I won't work on a bug without steps to reproduce and an unedited tracelog that captures the failure" is a good boundary.)
Wow. 900 applications down to 10 "SuperDay" participants down to 4 hires. All to work at.... posthog. What a depressing statistic.
This felt like a humblebrag to help make their point about hiring good talent and how many people want to be a hogger (or whatever they call people who work there), but it really just highlights how brutal the job market is. Yes, the market is also flooded with unqualified applicants and/or bots that will apply to any job listing that's posted, but this is still ridiculous.
I really feel bad for the 6 people who had to endure the technical interview AND THEN were given the honor of attending the "SuperDay", which sounds like a full day of at least 5 interviews (2-3 of them technical), and still got rejected. Not sure what the technical interview is like at PostHog, but assuming it's just an hour-long phone screen, those 6 people still probably devoted more than 7 hours just to interviewing at this place, only to get rejected. That's not counting any time spent preparing for interviews or anything else, either.
There must be a better way to do interviews. PostHog is not Google; PostHog (or any other startup) does not need to hire to the same standard that Google does.
Let me know when you're on par with Google in terms of revenue, benefits, prestige, or anything else Google offers; then, sure, I will jump through as many hoops as you want for the interview. Until then, hard pass.
As someone who considered applying to PostHog but never to Google (even though Google recruiters reached out to me, while PostHog’s did not), I can explain why they attract applicants.
First, in several countries, working at Google won’t make you rich—they don’t always offer the highest salaries in the region. You’ll have a comfortable life, but you’ll still need to work for the rest of your career. Second, Google is not a remote-first company, which is a dealbreaker for some.
My (perhaps flawed) reasoning was that, in its early days, PostHog was a very small company with a great product that people genuinely enjoyed using. If you received stock options, the potential for a big financial upside seemed high. Plus, working at a small company is simply more exciting—your contributions actually make a difference.
Does Google not offer RSUs to employees outside of the US? You won't ever get rich off of salary alone, but salary plus RSUs at a growing publicly traded company is a different story.
As you alluded to, it's very rare for even founders to make a life-changing amount of money from a company they start; it's exceedingly rare for early employees to have this happen, and it should not be a reason you consider working at a small company.
The right reasons to work at a small company are the other ones you mentioned: high impact, working in small teams, interesting work, a cool product, etc. But my point is that the interview processes for the small company and the big company are often very similar, even though the risk, scale, future career opportunities, and potential financial gain are worlds apart, which isn't right.
The level of effort I should have to put into an interview should be proportional to what I stand to gain by getting the job. This is kind of already how it works naturally because more desirable jobs have more applicants which makes it more competitive and requires more preparation. I stand to gain much more working at Google than I do at posthog, so why am I spending around the same amount of time interviewing at each place? Is working on a smaller team and having more impact on a smaller product worth it to me to do that? Personally that answer is no which is why I don’t understand the interview similarities (mainly time spent interviewing and acceptance rate in this case).
In Brazil, they do seem to offer it, but it amounts to very good compensation, not retire-in-10-years compensation.
Levels.fyi puts the total compensation of an L5 at ~$140k/year
https://www.levels.fyi/companies/google/salaries/software-en...
PostHog's seed round was 3 million USD, but the likelihood that Google, at this point in time, is going to, let's say, 20x in a decade is vanishingly small. And remember, even in the early days customers loved them, so certain risks were lowered.
Remote-first also changes the dynamic of who gets promoted.
I mean, to each his own, but personally I would only bet big if I wanted to quit my current job. I think the PostHog ship has long since sailed as far as the risk-to-reward ratio goes.
Sure, as long as you realize you are really betting big: if you're lucky and, after dilution, you own 0.1% of the company, PostHog needs to sell to someone for a billion dollars for you to make one million.
And that's one million before tax, before the preferred stock gets paid out to the big investors, and after the lock-up period that only lets you sell your stock a few months after the deal goes through. That's not a $1B valuation either; that's someone buying the company for $1B in cash. Not impossible, but definitely unlikely.
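To make that concrete, here's a rough back-of-envelope sketch in Python; the ownership, preference, and tax numbers are purely hypothetical and heavily simplified (real cap tables are messier):

```python
# Crude early-employee payout estimate. Every number here is hypothetical.
def employee_payout(sale_price, ownership_after_dilution,
                    liquidation_preference, tax_rate):
    """What an early employee might net in an all-cash acquisition."""
    # Assume 1x non-participating preferences come off the top.
    common_pool = max(sale_price - liquidation_preference, 0)
    gross = common_pool * ownership_after_dilution
    return gross * (1 - tax_rate)

# 0.1% ownership, a $1B cash sale, $100M of preferences, ~30% tax.
print(employee_payout(1_000_000_000, 0.001, 100_000_000, 0.30))  # ~630,000
```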
If you work at Google for 5 years, you will almost certainly make more than you would working at PostHog and getting acquired in the same amount of time. But yes, if lightning strikes twice in the same place and PostHog did an IPO and the stock 20xed, you would miss out on that money.
The idea that engineers at Google don't get rich is not based on reality.
Having attended a SuperDay, I can hands down state that their interview process is the best I've ever had (didn't get the job, though, which was probably for the best at this phase of life). It's designed to perfectly lift signal and minimize noise for what they're trying to achieve. Don't change a thing, PostHog.
I personally think there are more efficient ways to get a high signal-to-noise read on whether someone is going to be a good hire or not without having the candidate invest almost 9 hours in an interview process, but that's just me.
A 9 hour investment to make a decision that will strongly affect 50% of your waking life for years or decades doesn't seem like a big ask.
What happened at the SuperDay?
I had a SuperDay, which, to their credit, was paid. It was for a technical product role for which they wanted to hire people with some baseline technical ability, but it wasn't a coding job. I was up front that while I can code, and do build my own projects, I'm not an app developer. The SuperDay was an app development exercise, and they let me know I didn't pass because my app development skills were not up to snuff. Not really sure how or why it played out that way, but at least I was compensated.
>There must be a better way to do interviews.
Interviews are a game of asymmetric information. The job seeker has much more knowledge of what they can and cannot effectively do than the job offerer. And the job offerer has much more knowledge of what is and is not required for true success than the job seeker.
Given that, no, there really doesn't have to be a better way than just "interview a lot of people and take your best guess". If you stop taking the time to do that you will eventually be outmaneuvered by someone who does.
Sure, but the burden is on the company to understand the skills they need well enough to hire for the role, not on the candidate to prepare for everything and roll the dice in a 9-hour interview.
This is where interviews can and should be done differently. In my career, some questions I've been asked in interviews are: serialize and deserialize a binary tree, create an in-memory cache from scratch, design an elevator system for a building, sequence DNA strands together using dynamic programming, build a flight control system for an airport, recreate the atoi function, etc.
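To make the mismatch concrete, "recreate the atoi function" usually boils down to something like this minimal sketch (ignoring overflow and locale handling):

```python
def my_atoi(s: str) -> int:
    """Minimal atoi-style parser: skip whitespace, optional sign, then digits."""
    i, n = 0, len(s)
    while i < n and s[i].isspace():
        i += 1
    sign = 1
    if i < n and s[i] in "+-":
        sign = -1 if s[i] == "-" else 1
        i += 1
    value = 0
    while i < n and s[i].isdigit():
        value = value * 10 + int(s[i])
        i += 1
    return sign * value

assert my_atoi("  -42abc") == -42
```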
Sure enough, none of these interview questions had much of anything in common with the work I would end up doing at the company, so this was an inefficient way to hire that wasted a lot of my time.
This would be like trying to find a plumber to fix my sink by having them come over, showing them the sink, then sitting them down to grill them on the theory behind some thermodynamics and Bernoulli's principle, and maybe throwing in some design questions about how to redo my sink. This is surely how you find the best plumber, because only the best will take the time to really understand what they are doing when they fix a sink, right?
Like it or not, the vast majority of work in the software industry is e-plumbing, where you fix sinks and connect pipes together to start the flow of CRUD from one end to the other, which is why our way of interviewing people is insane.
As an exercise for the reader, see if you can figure out which of the interview questions I listed above were asked for a FAANG company vs. small startups that are all bankrupt now. Pretty hard, isn't it?
Your premise that it is a hard task (which I agree with) doesn't lead to the conclusion that stopping doing it the way it's done now will be harmful to the company. It might just be the opposite.
Also, the GP seems to wonder about better ways to do interviews, not stopping them entirely.
SuperDay is a paid day of work, plus a 30-minute talk with a founder and a 30-minute review of the day with an engineer.
Ah, OK, my mistake. So that's 8 hours, including the review and discussion portions, for the SuperDay; then let's say 45 minutes for the technical interview, so 8 hours and 45 minutes spent interviewing at a minimum.
> Wow. 900 applications down to 10 "SuperDay" participants down to 4 hires.
You’re almost 10 times more likely to be accepted to Stanford’s undergraduate program than to ever work as a hogger
> You’re almost 10 times more likely to be accepted to Stanford’s undergraduate program than to ever work as a hogger
You have to pay for Stanford whereas Hog pays you. Wrong direction?
Sheep mentality hard at work at companies. Just because Google does it (processes, technologies, systems, etc.), let's also adopt it without thinking about whether it's relevant to our context and use cases. I bet the same devs from these firms who ask candidates to traverse a minimum spanning tree would fumble at even the slightest variation of the problem appearing in daily life.
A general rant.
>> Wow. 900 applications down to 10 "SuperDay" participants down to 4 hires. All to work at.... posthog. What a depressing statistic.
Yeah, at first I thought it was some kind of parody, then I realized it's a serious article and was astonished.
> “If you aren't excited about what you're working on, pivot. It's as simple as that. You'll achieve more if you're working on something that feels yours.”
I doubt the rank and file ICs feel this way at all. It's analytics plumbing, and it's all for the sake of the paycheck.
Ya, I have yet to meet anyone who is passionate about analytics plumbing, surprisingly. I'm glad PostHog has found the 4 people in the world who truly are.
What this really translates to is the founders saying “we think posthog is our golden ticket to becoming rich in an exit event someday, so don’t mess it up for us”. It’s just not politically correct to say that, so it’s expressed as being “passionate about the problems the company solves” or “working on something that feels yours”.
And if you’re not someone who wants to dance and clap along with the founders as they sing “I’ve got a golden ticket!” on the way to the chocolate factory, only to be left standing behind the gate as they enter, then ya go ahead and pivot because you’re killing the vibe here…
If you’re very product or customer focused, you can be passionate about anything.
When I’ve worked at shops that made products I, personally, didn’t care about, it was always satisfying to see a customer be excited about a feature or be thankful for the tool I’m building.
The first time that happened was when an admin thanked me in a support ticket for speeding up the generation of expense report spreadsheets way back.
I've had finance thank me for writing a perl script that reduced the time they spent on daily (yes, daily) work from 2 hours to 2 minutes (ok, maybe 5 minutes).
Never underestimate the benefit of having the finance department on the side of IT's innovation.
Have you bothered to spend any time looking at their entirely open source product, docs, handbook, etc? There are many different things going on here. A gross and frankly ignorant simplification, considering what they've built.
No I didn’t bother to read their documentation after seeing their website was apparently made on Myspace.
As someone who has gone through the roughly 9 hour interview process in the past, was it the docs and open source product that made you want to work there?
At my current job, we use some variety of each tool that PostHog has built. We have analytics, feature flags, session replay, surveys, error logging and more. We spend an astronomical amount of money for these services, and where was that login again? Everything about managing (and utilizing) these subscriptions is inefficient, and coordinating all of these different views into our data is a terrible chore.
As an engineer wanting to build a successful product, I hate the fact that this is how it is. And then there's PostHog, where each of these tools is right there, connected to one another, ready to make my job (and my company's success) that much easier. Being able to work on something that simplifies all of this for others is very enticing.
Combine that with their open-company ethos (check out their handbook), and high-trust/high-performance product-engineer mindset, and yah... sign me up. This is a company that legitimately makes other people's lives easier, and thus makes for better products. Something to feel proud about.
> Let me know when you're on par with Google in terms of revenue or benefits or prestige
Do you really choose your employer based on their revenue? or prestige??
If I’m going to sit through 9 hours of interviews to maybe get an offer, then yes absolutely.
I’m not going to work my rear end off for 4 years to get 0.5% of potentially nothing and go through your dog and pony show of an interview cycle
I would bet a majority of engineers at FAANG companies are there primarily - or in large part - because of the prestige.
Absolutely. Both majorly impact my long term earning potential.
A lot of psychological safety here. This is fundamental for team success.
This list mentions A/B testing a few times and it's worth noting that A/B testing is great but it's not free.
I've seen a nontrivial number of smart engineers get so bogged down in wanting to A/B test everything that they spend more time building and maintaining the experiment framework than actually shipping product, only to realize the A/B testing was useless because they had just a few hundred data points. Data-driven decisions are definitely valuable, but you also have to recognize when you have no data to drive the decisions in the first place.
Overall, I agree with a lot of the list but I've seen that trap one too many times when people take the advice too superficially.
I think A/B testing is one of the most expensive ways of getting feedback on a product feature.
- You have to make good decisions about what you're going to test
- You have to build the feature twice
- You have to establish a statistically robust tracking mechanism. Using a vendor helps here, but you still need to correctly integrate with them.
- You have to test both versions of the feature AND the tracking and test selection mechanisms really well, because bugs in any of those invalidate the test
- You have to run it in production for several weeks (or you won't get statistically significant results) - and ensure it doesn't overlap with other tests in a way that could bias the results
- You'd better be good at statistics. I've seen plenty of A/B test results presented in ways that did not feel statistically sound to me.
... and after all of that, my experience is that a LOT of the tests you run don't show a statistically significant result one way or the other - so all of that effort really didn't teach you much that was useful.
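To put numbers on that, here's a minimal two-proportion z-test sketch (the standard textbook formula, nothing vendor-specific); with a few hundred users per arm and a realistic lift, the p-value rarely clears 0.05:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: returns the z statistic and two-sided p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# 5% vs 6% conversion with only 300 users per arm: z is about 0.54, p about 0.59.
print(two_proportion_z(15, 300, 18, 300))
```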
The problem is that talking people out of running an A/B test is really hard! No-one ever got fired for suggesting an A/B test - it feels like the "safe" option.
Want to do something much cheaper than that which results in a much higher level of information? Run usability tests. Recruit 3-5 testers and watch them use your new feature over screen sharing and talk through what they're doing. This is an order of magnitude cheaper than A/B testing and will probably teach you a whole lot more.
Some teams think they can A/B test their way to a great product. It can become a socially acceptable mechanism to avoid having opinions and reduce friction.
Steve Blank's quote about validating assumptions: "Lean was designed to inform the founders’ vision while they operated frugally at speed. It was not built as a focus group for consensus for those without deep convictions"
Is the Lean Startup Dead? (2018) https://medium.com/@sgblank/is-the-lean-startup-dead-71e0517...
Discussed on HN at the time: https://news.ycombinator.com/item?id=17917479
Any sort of political/PR fallout in any organization can be greatly limited or eliminated if you just explain a change as an "experiment" rather than something deliberate.
"We were just running an experiment; we do lots of those. We'll stop that particular experiment. No harm no foul" is much more palatable than "We thought we'd make that change. We will revert it. Sorry about that".
With the former people think: "Those guys are always experimenting with new stuff. With experimentations comes hiccups, but experimentation is generally good"
With the latter, people now want to know more about your decision-making process. How and why was that decision made? What were the underlying reasons? What was your end goal with such a change? Do you actually have a plan, or are you just stumbling in the dark?
Something that has always been a problem for me when doing A/B testing is that the C-suite just reads the landing page of A/B testing tools, where it says things like "Ridiculously easy A/B testing", and assumes the tool is going to do everything for them, including changing the page layout by simply adding some HTML in Tag Manager.
Over my career I have had to explain that this is not the case more times than should ever be necessary.
A/B testing is possibly the most misunderstood tool in our business, and people underestimate even the effort it takes to do it wrong... let alone to do it right.
> You have to build the feature twice
Erm, isn't it three times, or am I missing something?
You have what you are currently doing (feature Null), feature A, and feature B.
Otherwise, you can't distinguish that the feature is what is causing the change as opposed to something else like "novelty" (favoring a new feature) or "familiarity" (favoring an old feature).
If all you have is "what you are currently doing" as "feature A" and "new thing" as "feature B", you're going to have a murderous time getting enough statistical power to get any real information.
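For what it's worth, the usual way to split traffic across all three arms is deterministic hashing on a user id; a minimal sketch (the experiment name and split percentages are made up):

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "new-checkout") -> str:
    """Deterministically bucket a user into control / A / B via hashing."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable value in [0, 100) per user+experiment
    if bucket < 34:
        return "control"  # "feature Null": what you are currently doing
    if bucket < 67:
        return "A"
    return "B"

print(assign_variant("user-123"))  # the same user always lands in the same arm
```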
> You have to build the feature twice
Why though? Can't you have it dynamically look up whether the experiment is active for the current request and if so behave a certain way? And the place it looks up from can be updated however?
But you have to implement and test both sides of that "if" statement, both behaviors. Thus "build the feature twice"
Right: you have to take responsibility for implementing (and testing and short-term maintaining) two branches.
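Concretely, "build the feature twice" just means both branches of that flag check have to exist, work, and be tested. A minimal sketch with a hypothetical flag client (PostHog and most vendors expose something along these lines, but the names here are invented):

```python
class FlagClient:
    """Stand-in for a real feature-flag/experiment SDK (hypothetical API)."""
    def __init__(self, enabled_flags: set[str]):
        self.enabled_flags = enabled_flags

    def is_enabled(self, flag: str, user_id: str) -> bool:
        # A real client would bucket the user and report exposure to the vendor.
        return flag in self.enabled_flags

flags = FlagClient({"new-pricing-page"})

def render_pricing_page(user_id: str) -> str:
    if flags.is_enabled("new-pricing-page", user_id):
        return "new pricing layout"  # variant: must be built and tested
    return "old pricing layout"      # control: must keep working too

print(render_pricing_page("user-123"))
```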
Most A/B tests should not be done in a production-like way. Grab some post-it notes and sketch out both A and B; then watch users work with them.
For a lot of what you want to know, the above will give better information than a 100% polished A/B test in production. When people see a polished product, they won't give you the same kind of feedback as they will for an obvious quick sketch. The UI industry has gone wrong by making A/B in production too easy, even though the above has been known for many decades.
(A/B in production isn't all bad, but it is the last step, and often not worth it)
You're stretching the name a bit too far. The entire point of A/B tests is that you run them in the real world and gather how real users react.
Design-time user testing has been a thing for much longer than A/B tests. They are a different thing.
I mean, your point stands. But you can't do A/B tests on anything that is not production; what you're recommending is a different kind of test.
I'll accept your definition correction. However I think my point still stands: there are better things than A/B testing to get the information you need.
It probably helps that one of PostHog's core products is an A/B testing framework, so it's much easier for them to iterate on it internally for whatever they need to A/B test in PostHog itself. Even when you already have a best-in-class A/B testing framework, though, I agree: A/B testing too much, or waiting too long for "more data" to make decisions, can badly slow down momentum for features that should be no-brainers.
Agree!
Most orgs should just be shipping features. Before starting an experiment program, teams should brainstorm a portfolio of experiments. Just create a spreadsheet where the first column is a one-line hypothesis for each experiment, e.g. "Removing step X from the funnel will increase metric Y while reducing metric Z." Then RICE (Reach, Impact, Confidence, Effort) score your portfolio.
If the team can't come up with a portfolio of 10s to 100s of experiments then the team should just be shipping stuff.
And then experiment buildout should be standardized. Have a standardized XRD (Experiment Requirements Doc). Standardize eligibility and enrollment criteria. Which population sees this experiment? When do they see it? How do you test that bucketing is happening correctly? What events do analysts need? When do we do readouts?
That's just off the top of my head. Most orgs should just be shipping features.
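As a sketch of how that RICE column might be computed (the hypotheses and numbers below are entirely made up):

```python
# RICE score = (Reach * Impact * Confidence) / Effort. All rows are hypothetical.
experiments = [
    {"hypothesis": "Removing step X from the funnel increases metric Y",
     "reach": 5000, "impact": 2.0, "confidence": 0.8, "effort": 3},
    {"hypothesis": "New onboarding email lifts week-1 retention",
     "reach": 2000, "impact": 1.0, "confidence": 0.5, "effort": 1},
]

for e in experiments:
    e["rice"] = e["reach"] * e["impact"] * e["confidence"] / e["effort"]

for e in sorted(experiments, key=lambda e: e["rice"], reverse=True):
    print(f'{e["rice"]:>8.0f}  {e["hypothesis"]}')
```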
A/B testing is also high risk: a good test produces valuable data, a bad test produces harmful data. A company that does no testing is better off than a company that does bad testing. Many people treat product decisions made based on test results as unimpeachable, whereas they treat product decisions made on a hunch with a healthy skepticism that leads to better outcomes.
You need to have a lot of traffic for it to make sense. And then the effect size needs to be big enough to notice. Not easy.
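A rough sample-size sketch using the standard normal-approximation formula (two-sided 5% alpha, 80% power) shows why: detecting even a one-point lift on a 5% baseline takes thousands of users per arm.

```python
import math

def sample_size_per_arm(baseline, lift, z_alpha=1.96, z_beta=0.84):
    """Approximate users needed per arm to detect an absolute lift in conversion."""
    p1, p2 = baseline, baseline + lift
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / lift ** 2)

# 5% -> 6% conversion: roughly 8,100 users per arm before you can call it.
print(sample_size_per_arm(0.05, 0.01))
```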
I know this is not related to the article, which is great, but I am wondering how long "posthog" is going to be the name of this company given what "post hog" means.
I marvel at this every single time I see their billboards. It does mean I read all of their billboards, I guess.
I'm kind of dreading anywhere I work picking up the service because of how much I'd have to say the name without laughing or making jokes about it.
These are OK. They're great for highlighting the surface area of product building. But the list is heavily biased toward an analytics and testing perspective, because PostHog's product is analytics and testing.
Capturing analytics is a no-brainer. However, most data in most products at most companies starting out just fundamentally does not matter. It's dangerous to get in the habit of "being data driven" because it becomes a false sense of security, and paradoxically data is extremely easy to be biased by. And even with more rigor, you too easily get trapped in local optimums. Lastly, all things decay, but data and experimentation treat a win as if it lasts forever, until some other test beats it. It becomes exhausting to touch anything, and that's seen as a virtue. It's not.
Products need vision and taste.
18. Instead of forcing PRs into a day-of-work unit, it is better to aim for the minimum testable increment. Some features just need more work before they can be tested. Forcing everything into tiny tickets makes planning tedious and often introduces bugs into half-finished features.
22. I've seen design systems fail at many companies. It is very hard to get the right people and budget for this to succeed. Most startups are better off picking an existing UI toolkit and doing some theming/tweaking.
27. I disagree. If you put a product manager as the gatekeeper to users, you will transform the organization into a feature factory. Engineers should be engaged with users as much as possible.
27. I don't think you do disagree. Read point 29: hire and rely on product engineers. They have the full-stack technical skills needed to build a product along with customer obsession. Yes, this means they need to talk to users, do user interviews, recruit testers for new features, collect feedback, do support, and respond to incidents.
> skills needed to build a product along with customer obsession
So a disempowered founder-lite? What's their incentive?
Thought on this one:
> Trust is also built with transparency. Work in public, have discussions in the open, and document what you’re working on. This gives everyone the context they need, and eliminates the political squabbles that plague many companies.
This seems prone to feedback loops; it can go both directions. If there are political squabbles, discussion may be driven private, to avoid it getting shut down or derailed by certain people.
I've seen this. Takes management vigilantly guarding the commons from excessive drive bys and divebombs.
It takes a lot less energy to throw shit than it does to clean shit. There are infinite signals. Big egos take a lot of energy to repel and redirect in order to maintain it. I think it's absolutely worth it when it's possible, but yeah.
You wouldn't think so until you've done it, but it's really hard to get 6+ adults together where everyone's primary goal in that team is to make a thing. Seems like there's always one or more people who want to peck others, build fiefdoms, hold court.
I just want to say this blog is one of the best engineering/product blogs out there. I’ve been an avid reader for a while and always learn something. Very inspirational.
900 applications down to 4 hires looks like a small percentage to be optimistic about the market...
> Your product is downstream from your ideal customer profile (ICP).
Do not start with an idea. Start with a problem, and then construct a solution. The solution is the product. The problem implies someone who has that problem: this is the customer. How much of a problem it is tells you how much they want it and how much they will pay for it. Because an idea doesn't necessarily have a problem, it results in a product that doesn't necessarily have a customer.
> As 37signals' Jason Fried says, “You cannot validate an idea. It doesn't exist, you have to build the thing. The market will validate it.”
Similarly, don't set out to change the world. Put your product into the world, and the world will decide whether to change as a consequence.
The problem with having such a specific, prescriptive formula for success is that it never actually works out that way. Sure, there are high-level principles, the PostHog team executes brilliantly, and I love the product, but I think we're really bad at connecting the dots on what actually made something successful. Instead, we assign credit to random things just to make it all make sense. A lot of times, it's the equivalent of saying, "Bill Gates eats this cereal for breakfast, so if I do that, I should become a billionaire too."
First you get the success and then you write the formula. Easy.
I was always very passionate about programming and startups or small teams/companies, but I never even got to the first round because of my undergraduate degree. I think I would have tried hard given an opportunity and worked with a lot of discipline and passion. So now I have my own small team, and I try to look for people who don't have the right background but are still willing to learn and are passionate about building stuff. It is probably not the author's idea, and he is right in his approach as it has been established, but I will test and see whether what I am trying works or not.
Wait, I wouldn't hire someone who is OK spending so many more hours after the first few; it shows me that he or she can't handle the sunk-cost trap and will fall into it again and again. But OK, that's my possibly wrong opinion about this.
> If you’re going to pivot, make it big.
This is a great point. I've seen teams apply lean startup by testing -> changing something -> testing -> changing something -> testing ...
The problem is that the changes are so small that statistically you end up testing the same thing over and over and expecting different results. You need to make a significant change in your Persona, Problem, Promise, or Product to (hopefully) see promising results.
In the first image in the article, what is a "SuperDay"?
Is this like a trial day where you're invited to do a day of work for free?
They pay you for it, but it is a trial work day.
Story time. I interviewed for a job at posthog. I knew that I really loved their communication style. But I hadn't used their product and didn't know a ton about them except that their writing is fantastic.
The 'product for engineers' focus that they have is cool, but when I had an interview, it was clear that I wasn't a 'product for engineers' person.
When they proposed the Super Day, I was like, I'm not sure, because it's awesome to get paid for a day, but it's also not an unstressful event. And I sort of said I had some more questions before we moved on to the Super Day.
And they basically just said: we don't think it's going to work out. It was actually a pretty positive experience. I think that they correctly assessed the situation pretty quickly and my hesitation was a real signal to them.
(This is my recollection - I could have the details wrong.)
But yeah, the Super Day is a day of very structured work in the role that they set up. And it's paid.
I did a "trial week" at Linear a few years ago, essentially the same thing but for an entire week.
Nice to get paid, but even getting paid is stressful if you're used to working PAYE and not having to think about how to handle taxes on foreign income. The process works out very well for Linear, but as a candidate... not so much. It was probably the most stressful week of my professional career. You can't really simulate a typical work day or week with the stress of a sword of Damocles over your head.
The thing I was tasked to do was pretty simple on the surface, but it had some unexpected dead-end avenues that ate up quite a bit of time. So the week ended with "you're not fast enough." If I'd hit on the correct approach from the start, I'd probably have finished early easily, so it really felt like a coin toss to me.
> And I sort of said I had some more questions before we moved on to the Super Day.
> And they basically just said: we don't think it's going to work out.
Ouch, so their tight-knit, no-shortcuts hiring process is only thorough for them, not for the engineer applying.
Perhaps, but it wasn't a bad experience. I've come to value hiring processes, and I guess employers, that know how to make decisions swiftly.
Getting paid is, at least, fair. Everyone has some skin in the game. Pretty impressive.
There are companies out there that probably do none of these things and are 1000x more successful from a revenue or market-cap perspective. It seems like the biggest successes come from simply being in the right place at the right time and not being a complete idiot. Nobody wants to hear that, though.
Salespeople are what make a successful company, from a revenue point of view. There's only one company that I know of that's been successful at that level without sales: Atlassian. Everyone else has salespeople.
If you don't have salespeople, then you need to make a product that works and fulfills user needs. And it has to be good enough for word of mouth... which is where PostHog's experience comes in.
> There's only one company that I know of that's been successful at that level without sales: Atlassian.
It's not precisely so. They do have, and have had, partners that did sales/consulting work.
Yeah, but I doubt third party partners can get you to an IPO-level scale.
In my limited experience, partners are usually small integrators that cover maybe a single small urban area. Did they get a big consulting partner/implementer or something?
Sure luck plays a role. There are techniques to increase your odds of getting lucky though.
Sam Altman argued a startup's chance of success is “something like Idea times Product times Execution times Team times Luck, where Luck is a random number between zero and ten thousand”
A lot of the success in startups is not due to getting the initial idea right, but what founders do once they realize that their initial idea is wrong.
"Your website is the first impression your product is making" - unless your company does not operate in market where no one cares about your website.
We get serious leads only via our C-levels' and sales team's networking; no customer cares about our landing page, and back when we did care about the landing page, the leads it brought were not serious and were a waste of time.
Yep, I really don't want to read articles from any company that built its success during the SaaS greenfield period of 2008-2016. Plus, they already had the brand and market share to build even more upon that foundation.
Now if you’ve built something big that grew in the past few years organically, there’s more to learn from that success.
What, you're saying the secret to success in that era wasn't just coming up with a cool brand name that ends in -ify or -ly?
I always look for the post with the "success formula" in this situation and can never find it, but luck and timing are components. Also skill, resourcing, execution, and what I'll call "grit". The ratio is not defined but you need components of each.
I was reading the wikipedia page for WhatsApp this morning and indeed, right place, right time, right talent pool.
#2 and #3 are sort of symbiotic. If you have a bad hire, then it's going to be difficult to give them autonomy.
Sorry, what "successful products"?
And I have to say that "Technical Content Marketer " is one of the most dubious job titles I have ever seen.
Posthog is pretty successful! But since it isn't strictly necessary, perhaps we can make this thread more meaningful by removing success from the title above.
Well, I'd never heard about it before today, but that may be just my bad, as I'm not really much into web stuff. But when I see something like this (from the top of their website):
> The single platform to analyze, test, observe, and deploy new features
my reaction is "Wha?"
But what the hell - I do think the job title is bad, but then I've had a few of those myself.
I never let startups get away with "The" on Hacker News when I see it. I always replace it with "A" :)
On their homepage, they have their own logo in the section listing companies that use their products. I mean, if you dogfood it, great, but it doesn't exactly instill confidence if you have to reach for your own company to fill out the list.
We use PostHog for site analytics - it's a good product. IDK about popularity, but it's a joy to use.
They have customer testimonials here which have more companies than the front page: https://posthog.com/customers
Seems like a perfectly good description of someone whose job it is to achieve marketing goals by creating content for a technical audience.
They list a few products on their home page: https://posthog.com/products
I've used PostHog and it's pretty good. I don't know if I'd classify all of those as different products, you rarely want one of those without the other.
All that for product analytics? Lmao