Writing a recursive descent parser for this would require writing 16 functions, and you'd end up spending most of your time cycling through them to find the one that applies in a given situation.
I've written straightforward expression parsers as you suggest, but when I had to do it for SystemVerilog, I used a classic shunting-yard parser. You see the operator, compare its precedence against the stack, and you know immediately what to do, vs. possibly drilling down 16 levels of function calls to figure it out.
Another advantage of table-driven expression parsers is you can bail in error cases without needing to unwind countless levels of stack.
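For the curious, the core of the technique fits in a few lines. This is a minimal Python sketch handling only binary + - * / on numbers; a real SystemVerilog parser also needs unary operators, associativity rules, parentheses, and the full 16-level precedence table.

```python
# Minimal shunting-yard expression evaluator. One precedence-table
# lookup decides what to do with each operator -- no drilling through
# one function per precedence level.
PREC = {'+': 1, '-': 1, '*': 2, '/': 2}

def evaluate(tokens):
    ops, vals = [], []

    def apply_top():
        op = ops.pop()
        b, a = vals.pop(), vals.pop()
        vals.append({'+': a + b, '-': a - b,
                     '*': a * b, '/': a / b}[op])

    for tok in tokens:
        if tok in PREC:
            # Reduce while the stacked operator binds at least as tightly.
            while ops and PREC[ops[-1]] >= PREC[tok]:
                apply_top()
            ops.append(tok)
        else:
            vals.append(float(tok))
    while ops:
        apply_top()
    return vals[0]

evaluate("3 + 4 * 2".split())  # → 11.0
```

Error handling is the other win mentioned above: a bad token is detected right here in the loop, and you can report it and stop without unwinding a deep call stack.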
Four years ago the world was coming out of COVID, and supply chains were screwed up for years. Much of the inflation was not due to any particular policy, but just fallout from what COVID did to world markets.
The high oil prices now have a completely different cause.
I read the book when it first came out. In 1986 I took a job at a new company and Carl Alsing, who was the manager of the microkids (and had written every bit of microcode for machines that came before that) was in the office next to my cube. In fact, he was one of the people who interviewed me for the job.
So I reread the book, and my esteem for Kidder's writing went up even more. The parts of the book where he described Alsing's appearance and demeanor were spot on and captured essential things about him without using a lot of words.
One of the things I recall is Kidder said something like, "Alsing is a tall man, but his mild demeanor and hunched posture presents a much less imposing figure." Sure enough, that is exactly the experience I had with Carl.
I think my only crossing of paths with someone from Data General was actually just a few years ago. A startup was building cutting-edge photonic computing tech for AI, and one of the key people for the electronic hardware side was a graybeard from DG. Nice mild-mannered guy, and very capable and sensible. I recall a major tapeout working the first time.
(They also had an engineering executive who had been a computer engineer from a major CPU company. In one engineering reporting meeting, when a team mentioned they needed to do something with a particular facility of the off-the-shelf CPU, the executive volunteered that he could help with that, since he designed it. Everyone laughed.)
Hardware companies are a mixed blessing for us software people, but I wonder whether hardware engineers are more likely to keep it real (old-school high-powered engineer style) than software people?
I too had a crossing of paths with one of the microkids. She was described in the book as the lone female engineer on the team. She is a very sharp and talented individual and was a pleasure to work with.
I'm always gobsmacked when Trump says things like, "We need to get rid of all the wind turbines! They are killing all the birds! Look at the foot of any tower and you'll see nothing but dead birds!"
Is there a single person who thinks Trump gives a single damn about the birds? It is obviously just a pretext.
> Is there a single person who thinks Trump gives a single damn about the birds? It is obviously just a pretext.
This can be seen by the changes to the interpretation of the 1918 Migratory Bird Treaty Act (MBTA) his administration made in 2017 during his first term.
Briefly, they said it only prohibited intentional killing of birds. So say I wanted to pave over some wetlands that are a crucial nesting grounds for some birds that are covered by the MBTA to build a parking lot.
Before, the near-universal interpretation of the MBTA in every country that is party to the treaty (US, Canada, Mexico, Japan, and Russia) was that I can't put my parking lot there.
Under the Trump interpretation as long as I'm not building my parking lot there to intentionally kill the birds I can do it.
This was overturned in court in 2020. Just before leaving office in 2021, the administration tried to reinstate that interpretation.
Wind turbines are also minuscule compared to issues like pollution, land use, windows, and cats. You can also track migration and turn them off at key times if it's a huge issue (this is part of the motivation for research I'll be doing later in my master's, tracking hawk flocks via weather radar).
Wind turbines are an issue, but they account for approximately 0% of the 30% decline in US bird populations since the 1970s.
Edit: to be specific to Trump, funding for bird conservation has suffered under his administrations, and he has weakened things like the Migratory Bird Treaty Act. Obviously he doesn't care about birds, and the bird community is very frustrated with him.
Never thought about it, but that's a great point and comparison. From a quick Google search: between 365 million and 988 million birds die every year from window collisions (in the US alone). Windmills/turbines: between 140,000 and 679,000. If you compare per windmill vs. per building, the windmills are obviously going to "win," but it's the absolute numbers that matter in this case.
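Spelling out the arithmetic (using the midpoints of the ranges above; rough numbers only):

```python
# Midpoints of the cited ranges of annual US bird deaths
window_deaths = (365e6 + 988e6) / 2       # window collisions
turbine_deaths = (140_000 + 679_000) / 2  # wind turbines

# In absolute terms, windows kill roughly 1,650x more birds per year
print(window_deaths / turbine_deaths)  # → ~1652
```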
As you said, that has nothing to do with the actual preference for fossils vs. turbines, but a great point nonetheless.
Domestic cats kill on the order of 100x as many birds as windmills do.
Fossil fuels also kill millions of animals every year (not just birds), and harm the health of humans. Even ignoring the long-term effects of CO2, fine particulates cause respiratory problems, higher blood pressure, and can cause cancer. The tricky bit is you can't draw a straight line from the burning of coal to any particular (heh heh) death; it is just a statistical shift in health outcomes.
Anyway, all of that absolutely dwarfs the birds getting killed by wind farms.
Yeah, I'm your parent and I think I wrote that reply without reading it over because I was attempting to point out, with numbers, how absurd it is that anyone would say "windmills are bad because of how many birds they kill" as if that's a logical argument vs. the countless other things that kill birds en masse.
Yeah, see the reply I left with your sibling. I am in full agreement with you and wrote my comment way too quickly because I was trying to rebut the argument that windmills are in any way responsible for some crazy number of bird deaths.
I read the summary but not the paper, and it seems like it has nothing to do with physical design. This is a means of making the language's elaborate/compile/simulation performance faster.
Say someone wrote this code:
    wire [31:0] a, b, c;
    genvar i;
    for (i = 0; i < 32; i = i + 1) begin
        assign c[i] = a[i] & b[i];
    end
it sounds like this paper is about recognizing that it could be implemented as something akin to this:
    wire [31:0] a, b, c;
    assign c = a & b;
Both will produce the exact same gates, but the latter form will compile and simulate faster.
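To illustrate in software terms (a toy Python analogy, not anything from the paper): evaluating the AND bit by bit does 32x the work of a single word-wide operation, even though the results are identical.

```python
# Toy analogy: per-bit vs. vectorized evaluation of c = a & b.
# Names and the 32-bit width are illustrative, not from the paper.
def and_per_bit(a, b, width=32):
    c = 0
    for i in range(width):  # one operation per bit, like the loop form
        bit = ((a >> i) & 1) & ((b >> i) & 1)
        c |= bit << i
    return c

def and_vectorized(a, b, width=32):
    return (a & b) & ((1 << width) - 1)  # one word-wide operation

# Both forms agree; the vectorized one is simply cheaper to evaluate,
# which is the benefit for compile/simulation speed.
```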
In section 4.4 it discusses the effect of the technique on Cadence Genus, which is a PD/synthesis tool. My point is that you have to flatten the graph at some point, and most of the benefit of flattening it later (keeping/making things vectorized) is to enable higher-level transformations, which are mostly not effective.
Well, to be fair, the authors propose this thesis: "Although the vectorization of Verilog designs does not change the hardware they describe, it reduces their symbolic complexity, enabling faster and more scalable analysis and verification."
Maybe it doesn't help Design Compiler turn your shitty design into gold, but faster verification is an unalloyed good.
What kind of performance impact does it have? Obviously it depends on the specific program, but let's say the worst case scenario, something like a recursive implementation of the factorial function.
Minor. Faster unpacking of @_, but it's not a huge win unless you have a lot of arguments. The conventional Perl 5 interpreter has no JIT to leverage the benefits of stronger types, inlined functions, unrolled loops, etc. A factorial function has few arguments, so the unpacking gain will be small to nothing.
If you are in your terminal home, then yes, selfishly one would want the value to go up. But if you ever plan on moving to another home, dropping prices mean you get less for your sale, but you also pay less for your next purchase.
If you are in your terminal home, you also want low prices until the week before you eventually sell your house, as Texas has a high property tax rate to make up for the lack of state income tax.
I have mostly been using the Claude Sonnet models as they release each new one.
It is great for getting an overview on a pile of code that I'm not familiar with.
It has debugged some simple little problems I've had, e.g., a complex regex isn't behaving, so I'll give it the regex and a sample string and ask, "why isn't this matching?" and it will figure it out.
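As a hypothetical example of the kind of mismatch involved (not the author's actual regex): by default `.` does not match newlines, a classic reason a pattern that works on one-line samples fails on multi-line input.

```python
import re

# Hypothetical pattern and input, purely for illustration
s = "key: value\nmore"

# Fails: '.' stops at the newline, so '(.*)' can never reach 'more'
assert re.search(r"key: (.*)more", s) is None

# Works: re.DOTALL lets '.' cross the newline
m = re.search(r"key: (.*)more", s, re.DOTALL)
assert m.group(1) == "value\n"
```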
I've used it only a little for writing new code. In those cases I will write the shell of a subroutine and a comment saying what the subroutine takes in and what it returns, then ask the LLM to fill in the body. Then I review it.
It has been useful for translating ancient perl scripts into something more modern, like python.
Many of the workforce he laid off were content moderators -- I've read it was a serious effort with a large number of people doing thankless work. There is now way more anti-Semitic content on X, more racial insults, etc.
Side point, but you'll find plenty of anti-Semitism on HN in the Israel articles that have many comments. It comes in the form of conspiracy comments that people reply with, using Mossad, pedophilia, Netanyahu, and the US in the same sentence. Any replies calling it out become greyed from downvotes.
It's just not viewed as anti-Semitism, probably in the same way that the posts on X aren't viewed as far-right or extremist.
Extremists usually don't experience their views as extreme, but as rational and important.
Well not just content moderators, but he gutted Trust and Safety and the content moderation function of the company, which is surprisingly larger than the moderators themselves. Having worked peripherally with similar departments that had multiple teams, even though a lot of it comes down to human moderators, there is a ton of technology around the moderators, and even more keeping the content getting to them in the first place.
Firstly, this is a red queen’s race because like security, new types of unwanted content, threats and risks keep arising as the information (and misinformation) landscape and overall zeitgeist keeps shifting. The work is never done and the best that can be done is to build platforms and frameworks to streamline it. There is also a lot of fractal complexity everywhere.
E.g. there’s a ton of technology needed to support the moderators themselves. Infrastructure like review queues to enable them to rapidly handle content classified by type, risk level and priority. Like Jira but not Jira because it can’t scale to the number of queues and issues involved here. So you basically re-implement and maintain a Greenspun’s 10th rule version of Jira.
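The review-queue idea above can be sketched in miniature (all names and fields here are hypothetical; a real system adds classifiers, SLAs, per-type routing, and persistence):

```python
import heapq

class ReviewQueue:
    """Hand higher-risk content to moderators first; FIFO within a risk level."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker preserves submission order at equal risk

    def submit(self, content_id, risk):
        # heapq is a min-heap, so negate risk to pop highest-risk first
        heapq.heappush(self._heap, (-risk, self._seq, content_id))
        self._seq += 1

    def next_for_review(self):
        return heapq.heappop(self._heap)[2] if self._heap else None
```

Multiply this by content type, language, jurisdiction, and priority tier, and the "Jira but not Jira" comparison starts to make sense.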
There is still a huge amount of invisible complexity beyond that. For instance, you need to manage how much of a certain type of content gets exposed to a given moderator because some types (CSAM, gore) lead to burnout and PTSD. You also need to blur these things.
(Also the same type of content often gets reshared, so you need things like reverse image search to auto-filter that, because running the whole pipeline each time is expensive.)
This of course necessitates a ton of machine learning. Because risks keep shifting, and (pre-LLMs) each type requires the entire ML lifecycle and related infra: collecting and cleaning data, building classifiers for them, deploying them, seeing how well they work, and tuning them, and then replacing them when the bad actors eventually adapt to newer means.
ML is also of course needed for bots, spam and scams, which keep evolving. Entirely different techniques here though.
Then there is all the infra needed to handle the fallout of moderation. Counting strikes against users, dealing with their complaints, handling escalations, each case with a long history of interactions that needs to be collated for quick evaluation. Easier said than done because of course the backend is not an RDBMS but a bunch of MongoDB-alikes because webscale.
And all of this is a signal for the ranking used for feed, the main product, which keeps evolving, so a ton of “fire and motion” happening there. You introduce a new feature in the feed? You just introduced a dozen different abuse vectors.
Then there are policy makers and the technology needed to support them. Policy is always shifting as the landscape is shifting. This also includes dealing with regulations, which are also often shifting and require ways to deal with legal requirements and various legal systems like NCMEC. And this varies by jurisdiction. Like not just by countries, sometimes even by states.
(Funny story about NCMEC – it has an API to report CSAM, but I could not find it. So I googled something like “child porn API” and got a blank results page. Pretty sure I’m now on a list somewhere.)
I could go on and on. And I wasn’t even working in this area, just supporting these teams! Admittedly in our case I'd put the relevant headcount in the hundreds and not thousands, but our scale was also very different. For a company that is ENTIRELY about user-generated content at massive scale, up to national-level events like Arab Spring -- even if there was a lot of bloat -- I would not be surprised to learn this function was the majority of the workforce.
And Elon killed pretty much all of this. And, well, we see the results everyday.
I get that he shredded trust & safety, and that Twitter got way worse afterwards in that regard. But he fired more than half the workforce, and they were not mostly T&S people.
I dunno, most reports from the time (and a quick Google AI overview just now) mentioned the cuts largely focused on T&S and moderation teams. Even the ML teams he cut reportedly were working more on safety and integrity issues. Many who worked on "woke" issues were also cut, but the line between T&S and "woke" gets blurry quickly.
To be fair, this could be due to the bias in reporting, as media outlets may have had incentives for over-emphasizing the T&S angle.
I do not deny there was bloat. There was bloat in most tech firms at the time. But I don't think it was 80% bloat. My post was to explain how, even if T&S / moderation seems like a small function, it can require an unexpectedly large headcount -- probably even more for a pure-UGC company like Twitter -- and so could realistically account for the bulk of the cuts.
Come on. Zillions of developers have complained about getting RIF'd. It's not a mystery. I don't like Musk's Twitter. I don't like Musk. But pretending isn't getting us anywhere.
I'm not sure I follow. Assuming you mean the zillions of developers that got RIF'd at Twitter, do we know how many were bloat versus working on the T&S and related functions? I tend to believe the latter based on media reports and because that has clearly had an impact on the product.
It's OK if our premises are too far apart to hash this out. No, I don't think shredding T&S is one of the principal components of the giant Twitter RIF. Yes, T&S got killed; yes, that's bad. No, you can't explain how Musk manages to keep Twitter technically functioning as well as it does by pointing to T&S.
Totally fair. That said, I'll mention a plausible theory I have for how Elon -- and the rest of the industry -- have been managing to keep things running with all these layoffs: by burning out the people who remain.
Again, I'll admit Twitter and all other companies had bloat. But based on these industry-wide reports about record levels of burnout, inside knowledge of at least one company that I thought had unjustified layoffs, and a large number of conversations I've been having with connections across the tech industry, I think these layoffs have long gone far beyond the bloat.
You are strawmanning what I said. I said "Many of the workforce he laid off were content moderators" but you are arguing against something I didn't say, which was that those Musk laid off were mostly T&S people. Those are two different claims.
The people who have claimed for decades that there is rampant cheating have spent years and millions of dollars and have found so little that it actually proves the case against their claims. Further, it has been shown that what sounds like reasonable checking ends up preventing 100-200 legitimate votes for every one illegal vote prevented.
HN guidelines say not to get political, but it is hard to avoid in this case because it is one party which is claiming widespread voter fraud. Let's start with a simple case. Tell me which of these facts is not true:
* Donald Trump has claimed and continued to claim millions of illegal votes have been made against him, including millions by illegal aliens. The same claim, perhaps not using such large numbers, has been widely and frequently repeated by conservative media
* Donald Trump became president in 2017 and had the might and resources of the full federal government to root out voter fraud
* Donald Trump aggressively prosecutes his self-interests, and millions of illegal votes against him would be against his self-interest
* As president, it is not just in his personal interest but is part of his duty to ensure voting is fair
* Trump appointed Kris Kobach (more on him later), then the Kansas Secretary of State, to form a commission to get to the bottom of the rampant voter fraud
* Nothing of note was produced by the commission ... it just kind of petered out
One must conclude one of three things:
(1) Trump was negligent in his duties by not investigating the issue
(2) Trump or his subordinates were incompetent in their investigation of the issue
(3) Voter fraud is not common. I'll leave it to speculation whether this was an honest mistake on the part of conservatives or if they were lying for political gain
Read the Wikipedia article about these issues relative to Kobach. Even before Trump, he was banging the drum as Secretary of State for Kansas, claiming he knew of more than a hundred cases and asking for special powers to find the thousands of cases he knew were happening in Kansas. He was given authorization to do that investigation. How did it turn out? Start reading here:
> At that time, he "said he had identified more than 100 possible cases of double voting." Testifying during hearings on the bill, questioned by Rep. John Carmichael, Kobach was unable to cite a single other state that gives its secretary of state such authority.[153] By February 7, 2017, Kobach had filed nine cases and obtained six convictions. All were regarding cases of double voting; none would have been prevented by voter ID laws.[154][104][155] One case was dropped while two more remained pending. All six convictions involved older citizens, including four white Republican men and one woman, who were unaware that they had done anything wrong.
The rest of it is similar, and all of it confirmed only that voter fraud is rare. But worse than that are his tactics, which have been adopted by many states and disenfranchise 100x more legal voters than the illegal voters they catch. And statistically, they disenfranchise Democrats in far greater proportion than Republican voters (35% vs. 23% of the affected voters).
Here is another useful quote, along with a citation, on this topic from that same wikipedia entry:
> A Brennan Center for Justice report calculated that rates of actual voter fraud are between 0.00004 percent and 0.0009 percent. The Center calculated that someone is more likely to be struck by lightning than to commit voter fraud.[156]
I’m not saying we have widespread voter fraud. My gut feeling is that we don’t. But I’m a very trusting person. I always believe people when they ask for money on the street because their car broke down. I don’t know how you can confidently say there isn’t meaningful voter fraud.
How would you even verify past elections? You can point to millions spent on commissions and lawyers, but those can’t go back and generate data that was never contemporaneously collected.
Think of it in terms of computer security. You had a telnet server exposed to the internet for years. You have no logs, and the machine got scrapped before you ever got access to it. How would you do a security audit to determine if anyone broke into the server? You could spend millions on a commission and have the commission declare there was no security breach, but that would be for show, right?
You say people don’t look too hard for tax evasion, but people don’t look very hard for voter fraud as the voting is happening. And by its nature it’s something that you can’t reliably look for after the election has happened.
I think you need to start with proposing how a person could fraudulently vote.
If you show up to the polling place, you need to list the name and address of a registered voter in that district. How do you know this information?
If you use a relative or acquaintance whose name and registered address you know, when they show up to vote it will be noted that they have already voted. They can then cast a provisional ballot, and presumably their signature will match more closely than the fraudulent one, and theirs will be the one counted.
There are enough basic hurdles to this that I don't see how it can even be done at scale.
The official website says they collect either a driver's license number, state ID number, or the last 4 digits of your Social Security number. With that it should be trivial to flag potentially fraudulent applications for further investigation.
Do you have a source that says they don't use that information for verification?
"An official list of citizens to check citizenship status against does not exist. If the required information for voter registration is included – name; address; date of birth; a signature attesting to the truth of the information provided on the application; and an indication in the box confirming the individual is a U.S. citizen – the person must be added to the voter registration file. Modifying state law would require an act of the state legislature, and federal law, an act of Congress. Neither the Secretary of State nor the county auditor has lawmaking authority."
> That does say anyone can challenge a registration.
Yes, it does. But who is going to challenge 100,000 registrations, and how? This issue was brought up in the paper, and people objected to it, saying such checks would be an invasion of privacy.
I've always wondered (clearly not North American): how does one get on the list anyway? I would imagine getting on a list fraudulently leaves a paper trail and would be discovered in 5 minutes retroactively, but I'm still curious.
When you register to vote, you give your address as well as proof of eligibility to vote. That address is used to assign you a polling place, and also as an additional piece of data needed in order to filter out fakers. Your voting eligibility is checked before being added to the list, which also mitigates fakers.
If you're trying to register in someone else's name, you have to pray that they don't register themselves or show up to the polls to vote. That's a gamble which prevents systematic individual voter fraud.
Yes, it's unlikely that people are illegally voting in person in large numbers. It is relatively easy to do so, and the risk is relatively low, if you approach it intelligently (e.g. vote as someone who is registered, but highly unlikely to vote -- even if they do vote, you're highly unlikely to be caught anyway). However, there's just no incentive for individuals to do so, because the reward is very low: each individual's vote is really worth very little, and an individual fraudulent voter does not benefit from it enough to counterbalance the risk.
On the other hand, there are other ways for people to steal elections. For example, you can steal mail-in ballots from mailboxes, fill them, and covertly drop them in. It's particularly easy to do in states where all ballots are mail-in by default. The risk-reward calculation is different, because now one organized person can cast dozens, or hundreds of fraudulent votes, instead of just one.
In other states, you don't even need to steal them: you can just knock on the door, ask people for ballots (or buy them, many people will happily sell their right to vote for $20, because it's worthless to them), fill them in, and drop them off completely in the open. Of course, the stealing/buying and filling in the ballots is illegal, but since this happens in private, it's much harder to detect and prosecute. That's why most states disallow dropping off votes for third parties, but some states inexplicably allow it.
There are multiple recent cases where people were convicted for schemes like that, e.g. State of Arizona v. Guillermina Fuentes, Texas v. Monica Mendez, Michigan v. Trenae Rainey, U.S. v. Kim Phuong Taylor, and more. Since these are only the cases where a conviction was secured, the true number is much higher.
Buying ballots on a large scale seems difficult to me, because you have to keep a large group of strangers from talking. They will brag to their friends and family members and the information will come out. I can only imagine people buying a few ballots from their apolitical family members.
So... for each election, I have to register anew, and the agency in charge has a back office cross-checking this against... something? I guess they would first look at whether I voted last time? What if my birth certificate or whatever is from a different place? Do they assume I'm not risking using a forgery over politics (a fair assumption, I would say)?
My original birth certificate was old and had decayed, so I wanted a new one. I googled "how do I get a copy of my birth certificate", followed the instructions, and received a brand new certificate.
(I was a bit concerned because the hospital I was born in had been razed and the whole area redeveloped 50 years ago, but there was no problem.)
A couple weeks ago I went to the nearest DMV and got a RealID. It took 15 minutes. (The RealID is proof of citizenship and residency.)
The DMV people and the people in the passport office are very helpful in how to get the necessary proof.
>The DMV people and the people in the passport office are very helpful in how to get the necessary proof.
That's nice and matches my obviously-not-North-American experience. Have you considered that you might not be the target of the voter suppression for a reason?
No. You register once and that applies to all future elections (at least until you update your registration for whatever reason, e.g. because you changed addresses).
> and the agency in charge has a back office cross-checking this against... something?
Against the state's voter registration database, usually maintained by that state's Secretary of State or equivalent.
> What if my birth certificate or whatever is from a different place.
If the birth certificate is from somewhere within the US, then validating the birth certificate is usually just a matter of contacting the county clerk where you were born. If it's from somewhere outside the US, then you ain't eligible to vote anyway unless you've gone through the process of becoming a naturalized citizen — in which case you'd have more appropriate identifying documents that you'd use in place of your birth certificate.
>If it's from somewhere outside the US, then you ain't eligible to vote anyway unless you've gone through the process of becoming a naturalized citizen
It's nitpicking, but you can be a citizen by birth without having a birth certificate from the country you are a citizen of and without naturalizing -- but you will have some other document in that case too.
>Against the state's voter registration database, usually maintained by that state's Secretary of State or equivalent.
Isn't it circular? To be in the database you are checked against the database?
https://www.academia.edu/figures/3550818/table-2-operator-pr...