The Importance of Statistical Significance in Shopify A/B Testing

Have you ever made a change to your Shopify store, seen a small bump in sales, and wondered if it was just luck or truly effective? Maybe you changed a button color from blue to green and sales went up 5%. Was that a real improvement or just random chance?

For many store owners, the answer is often “I don’t know” – and that’s a costly problem. Without understanding statistical significance, you might be wasting time implementing changes that don’t actually work, or worse, missing out on changes that could dramatically boost your conversions.

By the end of this article, you’ll understand:

  • What statistical significance really means for your Shopify store
  • How to run A/B tests that deliver reliable, actionable results
  • Which tools and metrics matter most for your testing program
  • How to avoid common testing mistakes that waste your time and money

Ready to stop guessing and start knowing what truly works for your store? Let’s dive in!

Understanding the Fundamentals of A/B Testing

Before we jump into statistical significance, let’s clarify what A/B testing actually means for your Shopify store. This section will give you the foundation needed to understand why statistical methods matter.


A/B testing (sometimes called split testing) is simply showing two different versions of something to similar groups of visitors to see which performs better. In the context of your Shopify store, you might test:

  • Different product page layouts
  • Various checkout processes
  • Different pricing displays
  • Alternative product descriptions
  • Various call-to-action buttons

The control version is your current design or element, while the variant is the new version you’re testing against it. The key to proper A/B testing is that visitors are randomly assigned to see either version, creating two comparable groups.

For a test to be meaningful, you need to:

  1. Change only one element at a time
  2. Define clear success metrics (conversion rate, add-to-cart rate, etc.)
  3. Ensure proper visitor randomization
  4. Run the test long enough to gather sufficient data

Now that we understand what A/B testing is, you’re probably wondering how to tell when a test result is real versus just random chance. That’s where statistical significance enters the picture – and it’s about to transform how you optimize your store!

The Concept of Statistical Significance in Ecommerce

In this section, we’ll break down what statistical significance actually means for your Shopify store in simple terms – no complex math degree required!


Statistical significance is essentially a way to determine whether the differences you observe between your control and variant are likely to be real or just due to random chance. In simpler terms, it helps answer the question: “Is this improvement something I can trust, or did I just get lucky?”

When running A/B tests on your Shopify store, you’ll often hear about these key concepts:

  • P-value: The probability of seeing a difference at least this large if there were actually no real difference between your versions. The standard threshold is p < 0.05, meaning a result this extreme would show up by chance less than 5% of the time.
  • Confidence level: The flip side of the p-value. A 95% confidence level (corresponding to p < 0.05) means the difference you observed is very unlikely to be explained by random chance alone.
  • Null hypothesis: The assumption that there’s no difference between your control and variant versions.
  • Alternative hypothesis: The proposition that there is a real difference between versions.

In practical terms, when your A/B testing tool says your test has reached “statistical significance,” it means you can be reasonably confident that the difference in performance between your versions is real and not just random fluctuation.
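If you’re curious how those tools arrive at that verdict, here is a minimal Python sketch of the two-proportion z-test that most A/B significance calculators use behind the scenes. The visitor and conversion counts are made up purely for illustration:

```python
from math import sqrt, erf

def ab_test_significance(control_visitors, control_conversions,
                         variant_visitors, variant_conversions):
    """Two-proportion z-test: is the variant's conversion rate
    different from the control's by more than chance would explain?"""
    p1 = control_conversions / control_visitors
    p2 = variant_conversions / variant_visitors
    # Pooled conversion rate under the null hypothesis (no real difference)
    pooled = (control_conversions + variant_conversions) / (control_visitors + variant_visitors)
    se = sqrt(pooled * (1 - pooled) * (1 / control_visitors + 1 / variant_visitors))
    z = (p2 - p1) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p1, p2, z, p_value

# Hypothetical counts: 10,000 visitors per version
p1, p2, z, p = ab_test_significance(10_000, 200, 10_000, 245)
print(f"Control {p1:.2%}, variant {p2:.2%}, z = {z:.2f}, p = {p:.4f}")
print("Significant at 95% confidence" if p < 0.05 else "Not significant yet")
```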

But why should you care about these technical concepts? Because they’re the difference between making changes that actually boost your bottom line versus changes that just waste your time. Let’s explore why this matters so much for your Shopify store…

Why Statistical Significance Matters for Shopify Store Owners

At this point, you might be thinking, “Can’t I just run a test for a week and go with whatever looks better?” This section will show you why that approach could be costing you significant revenue.

Consider this scenario: You test a new product page design and after a week, it shows a 10% increase in conversions. Without understanding statistical significance, you might implement this change immediately – but what if that 10% increase was just random luck?


Here’s why statistical significance should matter to you as a Shopify store owner:

  • Avoiding costly mistakes: Implementing changes based on random variation can actually harm your conversion rate over time.
  • Resource efficiency: Redesigning pages, updating copy, and changing site elements takes time and money. Statistical significance ensures you’re investing these resources in changes that actually work.
  • Competitive advantage: While your competitors might be guessing, you’ll be making data-validated decisions that compound over time.
  • Customer satisfaction: Making proven improvements leads to better customer experiences and higher trust.
  • Incremental growth: Small, statistically significant improvements add up to substantial gains over time.

Consider this real example: An online retailer implemented a new checkout design that appeared to increase conversions by 5% in a quick two-day test. They rolled it out site-wide, only to discover over the next month that conversions actually dropped by 3%. The initial “improvement” was just statistical noise, but it cost them real revenue.

Now that you understand why statistical significance matters, you’re probably wondering which metrics you should actually be tracking in your Shopify tests. Let’s explore the key performance indicators that will give you the most valuable insights…

Key Metrics for A/B Testing on Shopify

Not all metrics are created equal when it comes to A/B testing your Shopify store. This section will help you identify which numbers actually deserve your attention.

The metrics you choose to track should align with your business goals, but these primary conversion metrics are essential for most Shopify stores:

  • Conversion rate: The percentage of visitors who complete a purchase
  • Add-to-cart rate: The percentage of visitors who add products to their cart
  • Average order value (AOV): The average amount spent per order
  • Revenue per visitor (RPV): Often the most comprehensive metric as it combines conversion rate and AOV
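To make these metrics concrete, here is a quick illustrative calculation (the daily totals are hypothetical) showing how RPV ties conversion rate and AOV together:

```python
# Hypothetical daily totals pulled from your analytics
visitors = 4_200
orders = 97
revenue = 6_305.00  # total order value for the day

conversion_rate = orders / visitors        # share of visitors who purchased
average_order_value = revenue / orders     # AOV
revenue_per_visitor = revenue / visitors   # RPV = conversion rate x AOV

print(f"Conversion rate: {conversion_rate:.2%}")
print(f"AOV: ${average_order_value:.2f}")
print(f"RPV: ${revenue_per_visitor:.2f}")
```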

Beyond these primary metrics, consider tracking these secondary indicators:

  • Bounce rate: The percentage of visitors who leave without taking any action
  • Time on page: How long visitors engage with your content
  • Cart abandonment rate: The percentage of users who add items to cart but don’t complete purchase
  • Click-through rates: For specific buttons or product recommendations

When setting up your tests, it’s crucial to prioritize metrics based on your specific business objectives. For example:

  • A new Shopify store might focus on add-to-cart rate to build their initial customer base
  • An established store might prioritize AOV to increase revenue from existing traffic
  • A luxury brand might care more about margin than pure conversion volume

Remember that leading indicators (like click-through rates) can give you early insights, but lagging indicators (like actual purchases) are what ultimately matter to your bottom line.

Now that you know what to measure, you’re probably wondering how to actually set up these tests on your Shopify store. Let’s explore the practical implementation steps…

Setting Up Statistically Valid A/B Tests on Shopify

Getting your test setup right is critical for generating reliable results. This section will walk you through the practical aspects of implementing A/B tests on your Shopify store.

Shopify offers some native A/B testing capabilities, but they’re somewhat limited. For comprehensive testing, you’ll typically want to use third-party tools. Here are your main options:

  • Shopify’s native capabilities: You can create different product descriptions or images and compare performance, but this isn’t true A/B testing with random assignment.
  • Google Optimize: Google’s free testing tool was once a popular choice for Shopify split tests, but it was discontinued in 2023, so you’ll need one of the app-based alternatives below.
  • Dedicated testing apps: Tools like Convert, VWO, or Optimizely offer more advanced features but come at a cost.
  • Growth Suite: Offers integrated testing capabilities specifically designed for Shopify stores.

When setting up your tests, follow these steps for statistical validity:

  1. Calculate required sample size: Use a sample size calculator to determine how many visitors you need based on your store’s current conversion rate and the minimum improvement you want to detect.
  2. Set up proper tracking: Ensure your analytics are correctly configured to track your chosen metrics.
  3. Implement random assignment: Your testing tool should randomly assign visitors to either the control or variant.
  4. Plan test duration: Based on your traffic volume and the calculated sample size, determine how long your test needs to run.
  5. Consider traffic allocation: For most tests, a 50/50 split works well, but you might choose different allocations in some cases.

A common mistake is ending tests too early or running them for too short a period. As a rule of thumb, even if your test reaches statistical significance quickly, consider running it for at least 1-2 weeks to account for day-of-week variations.
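Your testing tool should handle the random assignment in step 3 for you, but if you’re curious how it typically works under the hood, here is a minimal sketch of hash-based bucketing. Hashing a visitor ID keeps each shopper in the same group on every visit while still producing an effectively random split at the chosen allocation; the visitor IDs and test name are hypothetical:

```python
import hashlib

def assign_variant(visitor_id: str, test_name: str, traffic_split: float = 0.5) -> str:
    """Deterministically assign a visitor to 'control' or 'variant'.
    Hashing the visitor ID keeps the assignment stable across visits
    while spreading visitors evenly across the two groups."""
    digest = hashlib.sha256(f"{test_name}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # number between 0 and 1
    return "variant" if bucket < traffic_split else "control"

# Hypothetical visitor IDs (e.g. from a first-party cookie)
for vid in ["a1b2c3", "d4e5f6", "g7h8i9"]:
    print(vid, assign_variant(vid, "product_page_layout_v2"))
```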

With your test properly set up, you’ll need tools to determine when you’ve reached statistical significance. Let’s look at the calculators that can make this process easier…

Statistical Significance Calculators for Shopify

You don’t need to be a statistician to determine if your test results are valid. This section introduces tools that do the heavy lifting for you.

Statistical significance calculators help you determine whether your test results are reliable enough to act upon. Here are some popular options for Shopify merchants:

  • Shopify A/B Test Significance Calculator: Available from Replo.app, this calculator is specifically designed for ecommerce tests.
  • ABTestGuide Calculator: A simple, free tool that handles the basic calculations.
  • VWO’s A/B Test Significance Calculator: Offers more detailed statistical information.
  • Optimizely’s Sample Size Calculator: Helps you plan how many visitors you need before starting your test.

To use these calculators effectively:

  1. Input your control and variant visitor counts
  2. Enter the number of conversions for each version
  3. Review the calculator’s output for confidence level and statistical significance
  4. Look for a confidence level of at least 95% (equivalent to p < 0.05) before making decisions

Be cautious about calculator limitations. Even with statistical significance, consider these factors:

  • Is the observed difference meaningful from a business perspective?
  • Has the test run long enough to account for day-of-week and time-of-month variations?
  • Are there any seasonal or promotional activities that might be skewing results?

Now that you can calculate significance, you might be wondering what elements of your Shopify store are worth testing in the first place. Let’s explore the most impactful testing opportunities…

Common A/B Testing Elements for Shopify Stores

Not sure what to test first? This section highlights the Shopify store elements that typically deliver the biggest wins from A/B testing.

While you can test almost any element of your store, focusing on high-impact areas will give you the best return on your testing investment:

Product Page Elements

  • Product images: Number of images, image style, zoom functionality
  • Product descriptions: Length, style, format (bullet points vs. paragraphs)
  • Pricing display: Size, color, placement, with/without strikethrough prices
  • Reviews section: Position, display format, highlighting specific reviews

Call-to-Action Elements

  • Add-to-cart buttons: Color, size, text, position
  • Urgency indicators: “Limited stock,” “Popular item,” shipping timelines
  • Cross-sell/upsell offers: Position, presentation, timing

Checkout Process

  • Cart page layout: Single-page vs. multi-step checkout
  • Payment options: Variety, display, positioning
  • Shipping options: Presentation, free shipping thresholds
  • Trust badges: Type, placement, size

Navigation and Site Structure

  • Menu organization: Categories, dropdown vs. mega menu
  • Search functionality: Prominence, autocomplete features
  • Mobile navigation: Bottom bar vs. hamburger menu

Remember that mobile and desktop experiences often require separate testing, as what works well on one device may perform poorly on another.

With so many potential testing opportunities, how do you decide what to test first? Let’s look at frameworks for prioritizing your testing efforts…

Test Planning and Prioritization Frameworks

With countless testing possibilities, you need a systematic approach to decide what to test first. This section will help you build a structured testing program that delivers maximum impact.

A random approach to testing wastes time and resources. Instead, use these frameworks to identify and prioritize the most promising test opportunities:

The PIE Framework

Rate each potential test on a scale of 1-10 for:

  • Potential: How much improvement do you expect?
  • Importance: How valuable is the page or element to your business?
  • Ease: How easy is it to implement the test?

Add the scores and prioritize tests with the highest totals.
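If you keep your backlog in a spreadsheet or a small script, ranking ideas by PIE score is straightforward. Here is a minimal sketch with made-up test ideas and scores:

```python
# Hypothetical test ideas scored 1-10 on Potential, Importance, Ease
test_ideas = [
    {"name": "Sticky add-to-cart button on mobile", "potential": 8, "importance": 9, "ease": 7},
    {"name": "Shorter product descriptions",        "potential": 5, "importance": 6, "ease": 9},
    {"name": "One-page checkout",                   "potential": 9, "importance": 10, "ease": 3},
]

# PIE score is simply the sum of the three ratings
for idea in test_ideas:
    idea["pie_score"] = idea["potential"] + idea["importance"] + idea["ease"]

for idea in sorted(test_ideas, key=lambda i: i["pie_score"], reverse=True):
    print(f"{idea['pie_score']:>2}  {idea['name']}")
```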

The ICE Framework

Similar to PIE, rate tests on:

  • Impact: Potential effect on your main metrics
  • Confidence: How certain are you that this change will improve results?
  • Ease: Resources required to implement

Creating Your Testing Roadmap

Once you’ve prioritized potential tests, create a structured plan:

  1. Develop a testing calendar with specific timeframes
  2. Group related tests into themes (e.g., “Checkout Optimization Month”)
  3. Balance quick wins with more complex, higher-impact tests
  4. Document hypothesis, potential impact, and required resources for each test

For smaller Shopify stores, focus on 1-2 tests per month. Larger stores with more traffic can run multiple tests simultaneously, provided they don’t conflict with each other.

Remember to document your testing plan so you can track progress and learn from both successful and unsuccessful tests over time.

Now that you have a prioritized testing plan, you need to understand how long to run each test for valid results. Let’s explore sample size and test duration considerations…

Sample Size and Test Duration Considerations

One of the biggest testing mistakes is not gathering enough data before making decisions. This section will help you determine exactly how long to run your tests for reliable results.

Running tests with too few visitors can lead to false positives or missed opportunities. Here’s how to determine the right sample size and test duration:

Sample Size Factors

Your required sample size depends on:

  • Baseline conversion rate: Lower conversion rates require larger sample sizes
  • Minimum detectable effect: Smaller improvements require larger sample sizes
  • Statistical confidence level: Higher confidence requires larger sample sizes
  • Statistical power: Typically set at 80%, this is your ability to detect a real effect

As a practical example, if your current conversion rate is 2% and you want to detect a 20% relative improvement (to 2.4%), you’ll need roughly 21,000-25,000 visitors per variation at a 95% confidence level, depending on the statistical power you choose.
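If you’d like to sanity-check a calculator’s output, here is a minimal sketch of the standard two-proportion sample size formula. Exact figures vary slightly between calculators depending on their power and rounding assumptions:

```python
from math import ceil

def sample_size_per_variation(baseline_rate: float, relative_lift: float,
                              z_alpha: float = 1.96, z_power: float = 0.84) -> int:
    """Visitors needed per variation to detect a relative lift,
    at 95% confidence (z_alpha = 1.96) and 80% power (z_power = 0.84)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil(((z_alpha + z_power) ** 2) * variance / (p2 - p1) ** 2)

# Example from the text: 2% baseline, 20% relative improvement
print(sample_size_per_variation(0.02, 0.20))  # roughly 21,000 per variation at 80% power
```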

Test Duration Guidelines

  • Minimum duration: Even with high traffic, run tests for at least 7 days to capture full weekly cycles
  • Business cycles: Consider extending tests to cover full business cycles (often 2-4 weeks)
  • Seasonal factors: Be cautious with tests that span major holidays or seasonal transitions

When determining if you have enough data, consider these rules of thumb:

  1. At least 100 conversions per variation (minimum)
  2. For more reliable results, aim for 250-400 conversions per variation
  3. For low-traffic sites, consider running tests longer rather than settling for inconclusive results

Remember that statistical power is crucial – this is your test’s ability to detect a real difference when one exists. Underpowered tests frequently lead to false negatives, where you miss real improvements because you didn’t collect enough data.

With your test running for the right duration, you’ll need to properly interpret the results. Let’s explore how to read your test data correctly…

Interpreting A/B Test Results Correctly

Getting meaningful insights from your test results requires more than just looking at which version “won.” This section will help you extract maximum value from your testing data.

When your test has gathered sufficient data, follow these steps to properly interpret the results:

Basic Results Interpretation

  1. Check for statistical significance: Confirm you’ve reached at least 95% confidence
  2. Examine the confidence interval: This shows the range within which the true improvement likely falls (see the sketch after this list)
  3. Consider practical significance: A statistically significant result might still be too small to matter for your business
  4. Look at multiple metrics: Did improving one metric negatively impact others?
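Here is a minimal sketch of how the confidence interval from step 2 can be computed directly from your test counts; the numbers are hypothetical:

```python
from math import sqrt

def lift_confidence_interval(control_visitors, control_conversions,
                             variant_visitors, variant_conversions, z=1.96):
    """95% confidence interval for the absolute difference in conversion rate."""
    p1 = control_conversions / control_visitors
    p2 = variant_conversions / variant_visitors
    se = sqrt(p1 * (1 - p1) / control_visitors + p2 * (1 - p2) / variant_visitors)
    diff = p2 - p1
    return diff - z * se, diff + z * se

low, high = lift_confidence_interval(12_000, 240, 12_000, 300)
print(f"Absolute lift is between {low:.2%} and {high:.2%} (95% CI)")
# If the low end is barely above zero, the "win" may be too small to matter in practice.
```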

Beyond Basic Interpretation

For deeper insights:

  • Segment analysis: Different customer groups might respond differently to your variations (see the sketch after this list)
    • New vs. returning visitors
    • Traffic sources (social, direct, email)
    • Device types (mobile, desktop, tablet)
    • Geographic regions
  • Look for insights, not just winners: What does the result tell you about your customers’ preferences?
  • Consider follow-up tests: Does this result suggest other elements you should test?
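As a simple illustration of segment analysis, the sketch below computes per-segment, per-variation conversion rates from raw visitor records. The records are made up; in practice you would export thousands of rows from your analytics:

```python
from collections import defaultdict

# Hypothetical per-visitor test records: (segment, variation, converted)
records = [
    ("mobile",  "variant", True),  ("mobile",  "control", False),
    ("desktop", "variant", False), ("desktop", "control", True),
    ("mobile",  "variant", False), ("desktop", "variant", True),
    # ... in practice, thousands of rows exported from your analytics
]

totals = defaultdict(lambda: [0, 0])  # (segment, variation) -> [conversions, visitors]
for segment, variation, converted in records:
    totals[(segment, variation)][0] += int(converted)
    totals[(segment, variation)][1] += 1

for (segment, variation), (conversions, visitors) in sorted(totals.items()):
    print(f"{segment:8} {variation:8} {conversions / visitors:.1%} ({visitors} visitors)")
```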

When making implementation decisions, categorize your results as:

  • Clear winner: Statistically significant with meaningful business impact
  • Clear loser: Statistically significant negative result
  • Inconclusive: No statistical significance after adequate sample size

Remember that inconclusive tests still provide valuable information – they tell you that the element you tested might not matter much to your customers.

Even with proper interpretation, there are common pitfalls that can undermine your testing program. Let’s explore these mistakes and how to avoid them…

Common Pitfalls and Mistakes in Shopify A/B Testing

Even experienced store owners fall into these testing traps. Learning to recognize and avoid them will save you time, money, and frustration.

Testing Methodology Mistakes

  • Stopping tests too early: Ending tests as soon as you see significance can lead to false positives
  • Testing too many elements at once: Making multiple changes makes it impossible to know what caused the improvement
  • Ignoring sample size requirements: Making decisions based on too few visitors or conversions
  • Not accounting for external factors: Sales, holidays, or marketing campaigns can skew results

Statistical Interpretation Errors

  • Multiple testing problem: Running many tests increases the chance of false positives
  • Confusing statistical and practical significance: A statistically significant result might be too small to matter
  • Ignoring confidence intervals: The actual improvement might be at the lower end of the range
  • Misinterpreting inconclusive results: “No significant difference” doesn’t mean the test failed

Technical Implementation Issues

  • Sample pollution: The same visitor seeing different variations
  • Flicker effect: Users seeing the original version before the test variant loads
  • Tracking errors: Incorrectly configured analytics or goals
  • Not testing on all devices: Desktop-only tests miss mobile-specific issues

To avoid these pitfalls, develop a disciplined approach to testing with proper planning, adequate duration, and careful analysis. Document your process to maintain consistency across different tests.

For more sophisticated testing needs, there are advanced statistical approaches that can enhance your testing program. Let’s explore some of these concepts…

Advanced Statistical Concepts for Shopify Testing

As your testing program matures, these advanced approaches can help you get more refined insights and faster results. Don’t worry – we’ll keep the explanations straightforward.

Beyond Basic A/B Testing

  • Bayesian vs. Frequentist Testing: Traditional (frequentist) methods look at the probability of observing your data if there were no difference. Bayesian methods directly estimate the probability that one version is better than another, often allowing for faster decisions (see the sketch after this list).
  • Sequential Testing: Rather than waiting for a predetermined sample size, sequential testing continuously evaluates results as data comes in, potentially allowing for earlier stopping while maintaining statistical rigor.
  • Multi-Armed Bandit Testing: This approach automatically shifts more traffic to better-performing variations during the test, maximizing conversions while still gathering data.
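To make the Bayesian idea concrete, here is a minimal Monte Carlo sketch that estimates the probability the variant truly beats the control, using Beta posteriors over each conversion rate; the counts are hypothetical:

```python
import random

def probability_variant_beats_control(control_visitors, control_conversions,
                                      variant_visitors, variant_conversions,
                                      samples=100_000):
    """Bayesian view: draw from Beta posteriors for each conversion rate
    and count how often the variant's rate comes out higher."""
    wins = 0
    for _ in range(samples):
        # Beta(conversions + 1, non-conversions + 1) posterior with a flat prior
        c = random.betavariate(control_conversions + 1,
                               control_visitors - control_conversions + 1)
        v = random.betavariate(variant_conversions + 1,
                               variant_visitors - variant_conversions + 1)
        wins += v > c
    return wins / samples

# Hypothetical counts
print(f"P(variant > control) = {probability_variant_beats_control(8_000, 160, 8_000, 190):.1%}")
```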

Segmentation and Personalization

Advanced testing can help you deliver personalized experiences:

  • Segment-specific testing: Run tests targeting specific customer groups
  • Interaction effects: Discover how multiple variables work together
  • Personalization algorithms: Use test data to build rules for showing different content to different users

Multivariate Testing

While basic A/B testing changes one element, multivariate testing examines interactions between multiple elements:

  • Test combinations of headline, image, and button changes simultaneously
  • Discover which elements have the strongest impact
  • Identify interactions where combinations perform better than individual changes
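The catch with multivariate testing is how quickly the number of combinations grows, which is why it demands so much more traffic. A tiny sketch with hypothetical elements:

```python
from itertools import product

headlines = ["Free shipping over $50", "Ships within 24 hours"]
images    = ["studio photo", "lifestyle photo"]
buttons   = ["Add to cart", "Buy now"]

combinations = list(product(headlines, images, buttons))
print(f"{len(combinations)} combinations to test")  # 2 x 2 x 2 = 8
for headline, image, button in combinations:
    print(f"- {headline} | {image} | {button}")
```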

These advanced methods typically require more traffic and specialized tools, but they can dramatically accelerate your optimization program as your Shopify store grows.

Let’s now look at real-world examples of successful A/B tests that delivered significant improvements for Shopify stores…

Case Studies: Successful Shopify A/B Tests with Statistical Validation

Nothing is more inspiring than seeing real results. These case studies showcase how proper A/B testing with statistical significance led to substantial improvements for Shopify stores.

Product Page Optimization: Fashion Retailer

A clothing store tested product image presentation, comparing their standard gallery with a version that showed models wearing the items in real-life situations:

  • Test duration: 3 weeks
  • Traffic: 45,000 visitors split 50/50
  • Result: 23.5% increase in add-to-cart rate (99% confidence)
  • Business impact: $157,000 additional revenue over 6 months

Checkout Flow Improvement: Home Goods Store

A home décor store tested a streamlined checkout process against their standard 3-step checkout:

  • Test duration: 4 weeks
  • Traffic: 22,000 checkout initiations
  • Result: 15.8% reduction in cart abandonment (97% confidence)
  • Business impact: $32,000 monthly revenue increase

Pricing Display Test: Electronics Retailer

An electronics store tested showing savings as a percentage vs. absolute dollar amount:

  • Test duration: 2 weeks
  • Traffic: 38,000 visitors
  • Result: Dollar amount savings showed 7.2% higher conversion rate (95% confidence)
  • Additional insight: For items over $100, the effect was even stronger (11.3% improvement)

Mobile Optimization: Supplement Store

A nutritional supplement retailer tested a redesigned mobile product page with larger add-to-cart buttons and simplified information presentation:

  • Test duration: 3 weeks
  • Traffic: 51,000 mobile visitors
  • Result: 31.2% increase in mobile conversion rate (99.9% confidence)
  • Business impact: Mobile revenue increased from 30% to 41% of total sales

The key takeaway from these case studies is that seemingly small changes, when properly tested and validated, can deliver substantial business impact. None of these stores would have discovered these improvements through guesswork alone.

For sustainable growth, testing can’t be a one-time project. Let’s explore how to build an ongoing testing culture in your business…

Building a Testing Culture for Shopify Businesses

The most successful Shopify stores don’t just run occasional tests – they embed testing into their operational DNA. This section will show you how to create a sustainable testing program.

Elements of a Strong Testing Culture

  • Hypothesis-driven approach: Every test should start with a clear hypothesis about what you expect to happen and why
  • Learning orientation: Focus on insights, not just “wins”
  • Documentation and knowledge sharing: Build a library of test results and learnings
  • Regular testing rhythms: Establish consistent testing cycles
  • Cross-functional input: Gather test ideas from different team members and departments

Implementing a Testing Program

For small to medium Shopify stores:

  1. Start small: Begin with 1-2 tests per month on high-impact pages
  2. Create a simple test log: Document hypotheses, results, and implementation decisions (see the sketch after this list)
  3. Establish a regular review process: Monthly review of test results and planning of new tests
  4. Celebrate insights: Recognize team members who contribute valuable test ideas
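A test log doesn’t need to be fancy. Here is a minimal sketch of one possible structure (the fields and example entry are hypothetical); a spreadsheet with the same columns works just as well:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class TestLogEntry:
    name: str
    hypothesis: str
    primary_metric: str
    start_date: date
    end_date: Optional[date] = None
    result: str = "running"      # e.g. "winner", "loser", "inconclusive"
    learnings: str = ""

test_log = [
    TestLogEntry(
        name="Sticky add-to-cart button on mobile",
        hypothesis="A persistent button will lift mobile add-to-cart rate",
        primary_metric="add-to-cart rate",
        start_date=date(2025, 3, 1),
    ),
]
```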

For larger Shopify operations:

  1. Dedicated resources: Consider a conversion rate optimization specialist
  2. Testing roadmap: Develop quarterly testing themes and priorities
  3. Testing tools investment: More sophisticated testing platforms may be justified
  4. Cross-channel integration: Align testing with email, advertising, and other marketing efforts

Remember that a true testing culture embraces both successful and unsuccessful tests as learning opportunities. A test that disproves your hypothesis still provides valuable customer insights.

As you develop your testing program, it’s important to consider ethical dimensions of your testing activities. Let’s explore responsible testing practices…

Ethical Considerations in Statistical Testing

Responsible testing isn’t just about getting valid results – it’s also about respecting your customers and maintaining their trust. This section explores the ethical dimensions of your testing program.

Customer Transparency

Consider these approaches to ethical testing:

  • Privacy policy updates: Include information about your testing activities
  • Optional participation: Consider allowing customers to opt out of tests
  • Balance business and customer interests: Tests should aim to improve customer experience, not just extract more money

Testing Boundaries

Some testing practices raise ethical concerns:

  • Price testing: Showing different prices to different customers can damage trust if discovered
  • Artificial scarcity: Falsely claiming limited availability to increase conversions
  • Deceptive urgency: Creating false time pressure to rush purchasing decisions

Data Security

When collecting test data:

  • Comply with regulations: Ensure GDPR, CCPA, or other relevant compliance
  • Minimize personal data: Collect only what’s necessary for your tests
  • Secure storage: Protect test data as you would other customer information

The most sustainable approach is to focus on tests that create win-win scenarios – improving the customer experience while also benefiting your business. This might mean testing clearer product information, more helpful recommendations, or smoother checkout processes rather than manipulative tactics.

Looking ahead, how will A/B testing evolve for Shopify stores? Let’s explore emerging trends and technologies…

Future Trends in Shopify A/B Testing and Statistical Analysis

The world of ecommerce testing is rapidly evolving. This section highlights emerging trends that will shape the future of optimization for Shopify stores.

AI-Powered Testing

Artificial intelligence is transforming testing in several ways:

  • Automated hypothesis generation: AI suggesting what to test based on site analysis
  • Predictive testing: Forecasting test outcomes before full implementation
  • Dynamic allocation: Advanced algorithms that optimize traffic distribution during tests
  • Pattern recognition: Identifying subtle factors that influence conversion across multiple tests

Personalization and Individualization

Testing is moving beyond one-size-fits-all approaches:

  • Segment-specific experiences: Different site versions for different customer types
  • Individual-level optimization: Machine learning models that predict what will convert for each visitor
  • Contextual testing: Experiences that adapt based on time, device, location, and other factors

Expanded Testing Scope

Testing is expanding beyond single pages to more complex scenarios:

  • Journey testing: Optimizing entire customer pathways rather than isolated pages
  • Cross-channel testing: Coordinated testing across website, email, and advertising
  • Integrated online/offline testing: For merchants with both Shopify and physical stores

Advanced Statistical Methods

Statistical approaches are becoming more sophisticated:

  • Causal inference: Better understanding of what truly causes conversion increases
  • Bayesian methods: More intuitive and flexible statistical approaches
  • Machine learning integration: Combining traditional testing with predictive modeling

For Shopify merchants, these trends promise more efficient testing programs with faster, more accurate results and ultimately greater revenue gains from optimization efforts.

Let’s wrap up what we’ve learned and outline some next steps for implementing statistical significance in your Shopify testing program…

Conclusion

Throughout this article, we’ve explored how statistical significance transforms A/B testing from guesswork into a reliable growth strategy for your Shopify store. Let’s recap the key principles:

  • Statistical significance helps you distinguish between real improvements and random chance
  • Properly structured tests with adequate sample sizes are essential for reliable results
  • A systematic approach to test planning, prioritization, and analysis maximizes your return on testing investment
  • Building a testing culture in your business drives continuous improvement and sustainable growth
  • Ethical testing practices build customer trust while improving your business results

For Shopify store owners at different stages:

If you’re just starting with testing:

  1. Focus on high-impact pages like product and checkout pages
  2. Start with an affordable third-party testing app (Google Optimize, once the go-to free option, was discontinued in 2023)
  3. Commit to running tests for adequate durations, even if that means fewer tests

For intermediate testers:

  1. Develop more structured test planning and prioritization
  2. Begin segmenting your analysis to discover customer-specific insights
  3. Build a knowledge base of test results to inform future optimization

For advanced testing programs:

  1. Consider more sophisticated testing tools and methodologies
  2. Implement personalization based on test learnings
  3. Integrate testing across channels and customer touchpoints

Remember that statistical significance isn’t just a technical requirement – it’s the foundation of a data-driven approach that leads to better customer experiences and stronger business results.

Looking to accelerate your Shopify store’s growth? Growth Suite for Shopify offers integrated A/B testing tools specifically designed to help store owners make data-validated decisions that increase conversions and revenue. Try it today to take your testing program to the next level!
