Edit JS + CSS on Data tab: JS code not running on an A/B Test page variant

Hey guys,
I have an A/B Test running on my homepage. To report to analytics which page variant I'm on, I opened the "JS + CSS" button inside the Data tab and, under the `if (Builder.isBrowser)` code block, wrote code that reports to analytics:

`context.analyticsManager.track('homepage1')` or `context.analyticsManager.track('homepage2')`
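For illustration, here's a minimal, hypothetical sketch of that pattern. `analyticsManager` and `track` are placeholders for whatever analytics API your app injects into Builder's context; only the guard-then-track shape is the point:

```javascript
// Hypothetical sketch of the "JS + CSS" tab code. `analyticsManager` and
// `track` stand in for your own analytics API, assumed to have been
// injected into the Builder context by your app.
function reportVariant(context, variantName) {
  // Only fire if the manager was actually injected (in the real tab code,
  // this would live inside the `if (Builder.isBrowser)` block).
  if (context && context.analyticsManager) {
    context.analyticsManager.track(variantName);
    return true;
  }
  return false;
}

// Simulated context, purely for illustration:
const recorded = [];
const fakeContext = { analyticsManager: { track: (n) => recorded.push(n) } };
reportVariant(fakeContext, 'homepage1');
```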

The problem is that only the code for the default variant is running.
Why does this happen? Is this a bug?

Another route I took was to inject my analyticsManager onto the window object on the Next.js side, and then use a "Custom Code" block on each variant. Each block runs a script that tracks either homepage1 or homepage2.
The problem here was that BOTH script blocks ran, regardless of the chosen variation. How can a script block that only exists in my default variation run when I'm on "Variation 1"?
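A sketch of that window-injection route, for reference. `analyticsManager`, `track`, and the event names are placeholders, not a documented API; the point is exposing one shared manager that a Builder "Custom Code" block can call:

```javascript
// Sketch of the window-injection route: expose a shared analytics manager
// on the global object (window in the browser) so a Builder "Custom Code"
// block can call it. All names here are placeholders.
const globalObject = typeof window !== 'undefined' ? window : globalThis;

globalObject.analyticsManager = globalObject.analyticsManager || {
  events: [],
  track(event) {
    // Forward to your real analytics provider here; this demo just records.
    this.events.push(event);
  },
};

// A Custom Code block on one variant would then run something like:
globalObject.analyticsManager.track('homepage2');
```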

What would be the easiest way for me to run different code for each variation? Is there a way to know which variation was chosen inside my code? (I'm not talking about the winning variation, because I don't have a winner yet.)

Ron & The easyplant team

Hi @RonHagafny,

We hope you’re doing well!

Could you please share the Builder content entry link where you've implemented this?


Builder content entry is:

You can find “Homepage1 Script” code component and “Homepage2 Script” code component at the bottom of the tree in my “Default Variation” and “Variation 1”, respectively

I have removed the code from "Edit Content JS + CSS" on both variants, since it has proven not to work so far.

Your post here is very timely @RonHagafny! We're running our first homepage A/B test and are having the same issue determining whether the Default or Variation 1 is what gets rendered. FWIW, we're using custom code in Next.js to trigger an event to Amplitude on page load. When I look at the JSON that comes back, I see a `data` block with the Default blocks and data, and then a `variations` block with the Variation 1 blocks and data, but I can't tell from the response which version was actually rendered. I'll be interested to see @garima's response.


Hey @chtbks-jason, Hope you’re doing awesome!

Thanks, @RonHagafny, for sharing this. We have implemented this as well and are able to reproduce it. We really appreciate you reporting this possible bug; such feedback certainly helps us improve our product. We have raised a ticket for our dev team to investigate further and deploy a fix, and we will update you as soon as the fix is released. Thank you!

In the meantime, please check out this post as a possible workaround: Tracking Builder data to other analytics providers.


Using the link you provided, I'm seeing the following issue:

When using this with Next.js, `content.testVariationName` consistently comes back as an empty string. Is there something I need to do to make sure this metadata is populated?


(FWIW, it's typically populated on localhost. I suspect that's because Next.js runs fully SSR in local dev, and the content metadata may not be populated for SSG pages.)
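Until that field is reliably populated, a defensive sketch for reading it from the content entry returned by `builder.get(...)` might look like this. The `testVariationId` fallback and the `'default'` label are assumptions for illustration, not documented behavior:

```javascript
// Hedged sketch: derive a variation label from a Builder content entry.
// Falls back to testVariationId (if present) when testVariationName comes
// back as the empty string, as described in this thread.
function getVariationLabel(content) {
  if (content && content.testVariationName) {
    return content.testVariationName;
  }
  // Assumption: treat a missing/blank name plus no id as the default variant.
  return (content && content.testVariationId) || 'default';
}
```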

@chtbks-jason ,

I would love to know whether that solution ended up working for you, as it seems like a good workaround on paper. Please let me know.

@garima , thanks for the quick reply


Hey @chtbks-jason ,

Sorry for any inconvenience. This looks like a known issue with `testVariationName` returning a blank string. Thank you for raising it! I have created a ticket internally and forwarded it to our dev team to prioritize and fix, hopefully soon!

Thanks again!

@garima For what it’s worth, I think adding some kind of indicator to the pageData that comes from the builder.get call (in React/Next.js) as to which variation was loaded would probably be harder, but more important than testVariationName coming back populated. But totally not my call. :wink:

Actually, @garima, can you check on something for me related to this? Is the builder.tests.{{id}} cookie reliable? As in, does it always have the id of which variation is used as the value? That might be an alternative way to be able to grab what we need.
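If that cookie is reliable, a small parser could surface the chosen variation on the client. Note the `builder.tests.<contentId>` name format here is taken from this thread, not from documented API, so treat the whole sketch as an assumption:

```javascript
// Hypothetical helper: extract the chosen-variation id for a given A/B
// test from a document.cookie-style string. The cookie name format
// (`builder.tests.<contentId>`) is an assumption from this discussion.
function getChosenVariation(cookieString, contentId) {
  const prefix = `builder.tests.${contentId}=`;
  const match = (cookieString || '')
    .split('; ')
    .find((part) => part.startsWith(prefix));
  return match ? decodeURIComponent(match.slice(prefix.length)) : null;
}

// In the browser this would be called as:
//   getChosenVariation(document.cookie, 'YOUR_CONTENT_ID');
```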

@RonHagafny If you follow my posts, you’ll see I’m not finding any success with the various approaches unfortunately.

I don't know whether you use SSG/ISR or SSR for your pages. If you're using SSR, that cookie data should theoretically be available in the getServerSideProps context, but I don't know yet whether the cookie containing the loaded variation is set in time for that evaluation.

I’d love to compare notes some time as I really like how your site works and I feel like it’d be nice for all of us using Builder with Next.js to share best practices.

@chtbks-jason, We recently added a doc for cookies, you can check this out: Cookies - Builder.io

Also, we will soon be updating the doc for setting up Builder with Next 13, including best practices!

Thanks @garima. Any updates on getting the testVariationName to be populated?

@chtbks-jason, I'll email you the current status; hopefully the fix will deploy soon!

@chtbks-jason , we are only using SSG, so unfortunately that solution would still not work for us.

@garima, I'd love to get those updates soon. As it stands, we're still unable to get credible results for our homepage A/B test.

@RonHagafny FWIW, the cookie idea with SSR pages proved to be pretty fruitless too. There is a support ticket that is being prioritized and if Garima doesn’t beat me to it, I’ll let you know when that gets updated.


Hey @garima , I did notice that my scripts on my variants no longer run together, and that actually solves the problem for me.

I do understand that a fix has been made on your side. Would you guys mind sharing what was done?

@RonHagafny, We hope you’re doing well!

Sorry for the delayed response on this - missed your follow-up notification!

Honestly, no major fix was pushed from our end. On our test project we saw contentLoaded fire only once, which means we were not able to reproduce the scripts running multiple times. However, in the past few weeks we have updated many things, and one of those changes may have fixed the issue on your end.

We have ongoing direct conversations with another user, Jason, around their A/B test use-case issues. Let us know if the issue persists on your end!