Search Results

Search found 28 results on 2 pages for 'rees'.

Page 1/2 | 1 2  | Next Page >

  • what does "Net user administrator /active:yes" do to a computer?

    - by Rees
    I just purchased a new laptop and had some issues with it. I called tech support and they had me run a command in the cmd prompt, by right-clicking the cmd icon and selecting "run as administrator" (with the prompt at C:\windows\system32): "Net user administrator /active:no". After it was determined that this didn't fix the issue, we ran "Net user administrator /active:yes". I then rebooted and was asked for my Windows login for my user account "Rees" as usual, however ALL my settings were gone (including my desktop files), as though it was the first time I had booted up. WHAT in the world happened with this command?? I desperately NEED my user accounts and files back to how they were!! (Running Windows 7.) Please help!

    Read the article

  • sending email with PHP (preventing from being placed in spam folder)

    - by Rees
    Hello, I am trying to send email using PHP scripts. However, the recipient is receiving it in his/her SPAM folder - this is not the desired result (I would like to have it sent directly to their inbox so that I don't have to warn them to look in their SPAM folder). Below is the code I use to send the email using PEAR. What changes can I make to prevent the emails from going into the SPAM folder?
        $mail = Mail::factory("mail");
        $headers["From"] = "[email protected]";
        $headers["To"] = "to...rees[email protected]";
        $headers["Subject"] = "Activation";
        $body = "This is a test!";
        $mail->send("[email protected]", $headers, $body);
    ?>
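
    A minimal sketch of one common mitigation - sending through an authenticated SMTP relay with a From address on a domain you control, plus a Reply-To header. The SMTP host, credentials and addresses below are placeholders, and spam placement ultimately also depends on SPF/DKIM and message content, not just the sending code:

        <?php
        // Hedged sketch: hypothetical SMTP host, credentials and addresses.
        require_once "Mail.php";

        $smtp = Mail::factory("smtp", array(
            "host"     => "ssl://smtp.example.com",  // assumption: your provider's SMTP relay
            "port"     => 465,
            "auth"     => true,
            "username" => "no-reply@example.com",
            "password" => "your-password"
        ));

        $headers = array(
            "From"     => "Example App <no-reply@example.com>",  // same domain as the relay helps SPF/DKIM alignment
            "Reply-To" => "support@example.com",
            "To"       => "recipient@example.edu",
            "Subject"  => "Activation"
        );

        $result = $smtp->send("recipient@example.edu", $headers, "This is a test!");
        if (PEAR::isError($result)) {
            echo $result->getMessage();
        }
        ?>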

    Read the article

  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate’s tools, covered the first two parts of a simple Database Continuous Delivery process: Putting your database in to a source control system, and, Running a continuous integration process, each time changes are checked in. However there is, of course, a lot more to to Continuous Delivery than that. Specifically, in addition to the above: Putting some actual integration tests in to the CI process (otherwise, they don’t really do much, do they!?), Deploying the database changes with a managed, automated approach, Monitoring what you’ve just put live, to make sure you haven’t broken anything. This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There’ll then be a third post on automated database deployment followed by a final post dealing with the last item – monitoring changes on the live system. In the previous post, I used a mixture of Red Gate products and other 3rd party software – GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in an heterogeneous environment, using software from different vendors to suit their purposes and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian’s BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this, post, I’ll be mostly using Red Gate products only (other than tSQLt). I would do this, firstly because I work for Red Gate. However, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully – so I didn’t have any choice!   Background on Continuous Delivery For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery – Reliable Software Releases through Build, Test, and Deployment Automation This book is not of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it’s that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say however, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references: Refactoring Databases – Evolutionary Database Design, by Scott J Ambler and Pramod J. Sadalage Versioning Databases – Branching and Merging, by Scott Allen In particular, I don’t deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler: Continuous Delivery is about keeping your application in a state where it is always able to deploy into production.   I.e. we are not talking about continuously delivery updates to the production database every time someone checks in an amendment to a stored procedure. 
That is possible (and what Martin calls Continuous Deployment). However, again, that’s more than I describe in this article. And I doubt I need to remind DBAs or Developers to Proceed with Caution!   Integration Testing Back to something practical. The next stage, building on our set up from the previous article, is to add in some integration tests to the process. As I say, the CI process, though interesting, isn’t enormously useful without some sort of test process running. For this we’ll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate’s SQL Test found on http://www.red-gate.com/products/sql-development/sql-test/ or can be downloaded separately from www.tsqlt.org - though I’ll provide a step-by-step guide below for setting this up. Getting tSQLt set up via SQL Test Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed. Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on to your machine, if not already installed. Open SSMS. You should now see SQL Test under the Tools menu:   Clicking this link will give you the basic SQL Test dialogue: As yet, though we’ve installed the SQL Test product we haven’t yet installed the tSQLt test framework on to any particular database. To do this, we need to add our RedGateApp database using this dialogue, by clicking on the + Add Database to SQL Test… link, selecting the RedGateApp database and clicking the Add Database link:   In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the “Add SQL Cop tests” option (shown below). SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database. However, we won’t be using them in this particular simple example: Once you’ve clicked on the OK button, the changes described in the dialogue will be made to your database. Some of these are shown in the left-hand-side below: We’ve now installed the framework. However, we haven’t actually created any tests, so this will be the next step. But, before we proceed, we’ve made an update to our database so should, again check this in to source control, adding comments as required:   Also worth a quick check that your build still runs with the new additions!: (And a quick check of the RedGateAppCI database shows that the changes have been made).   Creating and Testing a Unit Test There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I’m going to use here is pretty Mickey Mouse – our database table is going to include some email addresses as reference data and I want to check whether these are all in a correct email format. Nothing clever but it illustrates the process and hopefully shows the method by which more interesting tests could be set up. Adding Reference Data to our Database To start, I want to add some reference data to my database, and have this source controlled (as well as the schema). 
First of all I need to add some data in to my solitary table – this can be done a number of ways, but I’ll do this in SSMS for simplicity: I then add some reference data to my table: Currently this reference data just exists in the database. For proper integration testing, this needs to form part of the source-controlled version of the database – and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a Primary Key needs to be added to the table. Right click the table, select Design, then right-click on the first “id” row. Then click on “Set Primary Key”: NB: once this change is made, click Save to save the change to the table. Then, to source control this reference data, right click on the table (dbo.Email) and selecting the following option:   In the next screen, link the data in the Email table, by selecting it from the list and clicking “save and close”: We should at this point re-commit the changes (both the addition of the Primary Key, and the data) to the Git repo. NB: From here on, I won’t show screenshots for the GitHub side of things – it’s the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo is taking it from). An interesting point to note here, when these changes are committed in SQL Source Control (right-click database and select “Commit Changes to Source Control..”): The display gives a warning about possibly needing a migration script for the “Add Primary Key” step of the changes. This isn’t actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database:   Creating and Running the Test As I mention, the test I’m going to use here is a very simple one - are the email addresses in my reference table valid? This isn’t of course, a full test of email validation (I expect the email addresses I’ve chosen here aren’t really the those of the Fab Four) – but just a very basic check of format used. I’ve taken the relevant SQL from this Stack Overflow article. In SSMS select “SQL Test” from the Tools menu, then click on + New Test: In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct – RedGateApp in this example: Click “Create Test”. After closing a couple of subsequent dialogues, you’ll see a dummy script for the test, that needs filling in:   We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I’m going to use here is as below. 
This needs to be copied and pasted in to the query window, to replace the default given by tSQLt:

    -- Basic email check test
    ALTER PROCEDURE [MyChecks].[test Check Email Addresses]
    AS
    BEGIN
        SET NOCOUNT ON

        DECLARE @Output VARCHAR(MAX)
        SET @Output = ''

        SELECT @Output = @Output + Email + CHAR(13) + CHAR(10)
        FROM dbo.Email
        WHERE Email NOT LIKE '%_@__%.__%'

        IF @Output > ''
        BEGIN
            SET @Output = CHAR(13) + CHAR(10) + @Output
            EXEC tSQLt.Fail @Output
        END
    END;

Once this script is entered, hit execute to add the Stored Procedure to the database. Before committing the test to source control, it's worth just checking that it works! For a positive test, click on "SQL Test" from the Tools menu, then click Run Tests. You should see output like the following: - a green tick to indicate success! But of course, what we also need to do is test that this is actually doing something by showing a failed test. Edit one of the email addresses in your table to an incorrect format: Now, re-run the same SQL Test as before and you'll see the following: Great - we now know that our test is really doing something! You'll also see a useful error message at the bottom of SSMS: (leave the email address as invalid for now, for the next steps). The next stage is to check this new test in to source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client).

Checking that the Tests are Running as Integration Tests

After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the Stored Procedures for the RedGateAppCI database, the SPROC for the new test has been moved over to the database. However this is not exactly what we were after. We didn't want to just copy objects from one database to another, but actually run the tests as part of the build/integration test process. I.e. we're continuously checking any changes we make (in this case, to the reference data emails), to ensure we're not breaking a test that we've set up. The behaviour we want to see is that, if we check in static data that is incorrect (as we did in step 9 above) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows the following: - sadly, a successful build! To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to the file sqlCI.targets in your working folder. Edit this document, make the following change, save the document, then commit and sync this change in the GitHub client:

    <!-- tSQLt tests -->
    <!-- Optional -->
    <!-- To run tSQLt tests in source control for the database, enter true. -->
    <enableTsqlt>true</enableTsqlt>

Now, if we re-run the build in Bamboo (NB: I've moved to a new server here, hence the different address and build number): - superb, a broken build!! The error message isn't great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log is towards the bottom. Pulling out this part:

21-Jun-2013 11:35:19 Build FAILED.
21-Jun-2013 11:35:19 21-Jun-2013 11:35:19 "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) -> 21-Jun-2013 11:35:19 (sqlCI target) -> 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]   As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I’m going to correct the invalid email address, then check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build:   This should have fixed the build: It worked! Summary This has been a very quick run through the implementation of CI for databases, including tSQLt tests to test whether your database updates are working. The next post in this series will focus on automated deployment – we’ve tested our database changes, how can we now deploy these to target sites?  

    Read the article

  • Pins on Pinterest link to 404 - yet link is valid

    - by Damian Rees
    This is a really strange one. I've set up a bunch of pins on Pinterest linking through to our services, which all work fine. Then I decided to do the same for our blog articles (we use WordPress), yet every time I click the link (and I've done this on different computers) the link goes to a 404 page on our site. However, the link is valid, and if you right-click the pin and open it in a new window it opens fine. I have contacted Pinterest, who are next to useless. I have also tried different browsers, different computers and different Pinterest accounts. I can't see any weirdness in my htaccess files causing this, so I'm a little stumped. Any suggestions?

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices - STAGE 4: AUTOMATED DEPLOYMENT
If you've been fortunate enough to get to the stage where you've implemented some sort of continuous integration process for your database updates, then hopefully you're seeing the benefits of that investment - constant feedback on changes your devs are making, advanced warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic, so you know it's going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear.
Our Database Delivery Learning Program consists of four stages, really three - source controlling a database, running continuous integration processes, then how to set up automated deployment (the middle stage is split in two - basic and advanced continuous integration - making four stages in total). If you've managed to work through the first three of these stages - source control, then basic and advanced CI - then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server. But this is only part of the story. Great, we know that our updates work, that the upgrade process works, that the upgrade isn't going to wipe our 4Tb of production data with a single DROP TABLE. But - how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There's a significant gap between your latest version being tested, and it being easily releasable.
Just a quick note on terminology - there's a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: "Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users". There's another really useful piece here on Simple-Talk about the need for continuous delivery and how it applies to the database, written by Phil Factor - specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app).
So, hopefully you're convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or "release management") process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can't I just install one of the many release management tools available and hey presto, I'm ready! If only it were that simple. Below I list some of the areas that it's worth spending a little time on, where a little planning and prep could go a long way.
It’s also worth pointing out, that this should really be an evolving process. Depending on your starting point of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you’ve got a CI mechanism in place, you’re certainly a long way down that path. Nevertheless, we’d recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912 For now, in this post, we’ll look at the following areas for your checklist: You and Your Team Environments The Deployment Process Rollback and Recovery Development Practices You and Your Team It’s a cliché in the DevOps community that “It’s not all about processes and tools, really it’s all about a culture”. As stated in this DevOps report from Puppet Labs: “DevOps processes and tooling contribute to high performance, but these practices alone aren’t enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn’t understood outside of a specific group”. Like most clichés, there’s truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it’s an investment with the benefits coming way down the line. But the benefits are huge – for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as: -2008 to present: overall development costs reduced by 40% -Number of programs under development increased by 140% -Development costs per program down 78% -Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40% But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing. That they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you’re ever struggling to convince someone of the value I’d strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org. I’ve spoken to many customers who have implemented database CI who describe their deployment process as “The point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that’s finished we revert to manual.” This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA “We’re changing everything you do and your toolset next week, to automate most of your role – that’s okay isn’t it?” isn’t likely to go down well. 
There’s some work here to bring him/her onside – to explain what you’re doing, why there will still be control of the deployment process and so on. Or of course, if you’re the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you’d like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them, to find out. As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager’s manager too. As mentioned, unless there’s buy-in “from the top”, you’re going to hit problems when the implementation starts to get rocky (and what tool/process implementations don’t get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress. Actions: Get your DBA involved (or whoever looks after live deployments) and discuss what you’re planning to do or, if you’re the DBA yourself, get the dev team up-to-speed with your plans, Get your boss involved too and make sure he/she is bought in to the investment. Environments Where are you going to deploy to? And really this question is – what environments do you want set up for your deployment pipeline? Assume everyone has “Production”, but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I’ve seen every setup under the sun, and there is often a big difference between “What we want, to do continuous delivery properly” and “What we’re currently stuck with”. Some of these differences are: What we want What we’ve got Each developer with their own dedicated database environment A single shared “development” environment, used by everyone at once An Integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit-tests running on that machine In fact if you have a CI process running, you’re likely to have some sort of integration server running (even if you don’t call it that!). Whether you have a full suite of unit tests running is a different question… Separate QA environment used explicitly for manual testing prior to release “We just test on the dev environments, or maybe pre-production” A proper pre-production (or “staging”) box that matches production as closely as possible Hopefully a pre-production box of some sort. But does it match production closely!? A production environment reproducible from source control A production box which has drifted significantly from anything in source control The big question is – how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you’re going to create and where they’ll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you’re working on a new, greenfield project, or trying to update an existing, brownfield application. 
There’s a world if difference between starting from scratch with 4 or 5 clean environments (reproducible from source control of course!), and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have: Dedicated development databases, An Integration server used for testing continuous integration and running unit tests. [NB: This is the point at which deployments are automatic, without human intervention. Each deployment after this point is a one-click (but human) action], QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing, Pre-production. The environment you use to test the production release process, Production. * A note on the use of the word “automatic” – when carrying out automated deployments this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic in that it’s not a person manually running through a checklist or set of actions. The deployment still requires a single-click from a user. Actions: Get your environments set up and ready, Set access permissions appropriately, Make sure everyone understands what the environments will be used for (it’s not a “free-for-all” with all environments to be accessed, played with and changed by development). The Deployment Process As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers “How do your database changes get live? How does your manual process work?” Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring in to pre-prod, Again, use a schema compare tool to find the differences between the latest version of the database ready to go live (i.e. what the team have been developing). This generates a script, User (generally, the DBA), reviews the script. This often involves manually checking updates against a spreadsheet or similar, Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped), If all working, run the script on production.* * this assumes there’s no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something in to the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up here at www.sqllighthouse.com, if you’re interested in testing early versions. There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can’t automate a DBA checking through a script, that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment – whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!), before the deployment process keys in and automatically deploys that change to the live box. Not for the faint hearted – and really not something we recommend. 
At the other extreme, you might be more comfortable with a semi-automated process – the pre-production/production matching process is automated (with an error thrown if these environments don’t match), followed by a manual intervention, allowing for script approval by the DBA. One he/she clicks “Okay, I’m happy for that to go live”, the latter stages automatically take the script through to live. And anything in between of course – and other variations. But we’d strongly recommended sitting down with a whiteboard and your team, and spending a couple of hours mapping out “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?” NB: Most of what we’re discussing here is about production deployments. It’s important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes. Actions: Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments – “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?” Repeat for earlier environments (QA and so on). Rollback and Recovery If only every deployment went according to plan! Unfortunately they don’t – and when things go wrong, you need a rollback or recovery plan for what you’re going to do in that situation. Once you move in to a more automated database deployment process, you’re far more likely to be deploying more frequently than before. No longer once every 6 months, maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for. NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem. There are various options, which we’ll explore in subsequent articles, things like: Immediately restore from backup, Have a pre-tested rollback script (remembering that really this is a “roll-forward” script – there’s not really such a thing as a rollback script for a database!) Have fallback environments – for example, using a blue-green deployment pattern. Different options have pros and cons – some are easier to set up, some require more investment in infrastructure; and of course some work better than others (the key issue with using backups, is loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism. Actions: Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment and requirements for a completely failsafe process. Development Practices This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is actually intrinsically linked with the patterns and practices used to develop that database and linked application. 
So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern “Branch by abstraction”. Explained nicely here, by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner so that you can always roll back, without data loss – by making incremental updates to the database backward compatible. Slides 103-108 of the following slidedeck, from Niek Bartholomeus explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace As these slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – this affords the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily). There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage is a great read, if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515 But the question is – how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there’s a difference here between migrating old projects and starting afresh – with the latter it’s much easier to instigate best practice from the start. Actions: For your business, work out how far down the path you want to go, amending your database development patterns to “best practice”. It’s a trade-off between implementing quality processes, and the necessity to do so (depending on how often you make complex changes). Socialise these changes with your development group. No-one likes having “best practice” changes imposed on them, so good to introduce these ideas and the rationale behind them early.   Summary The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning, if you want to get the most out of the work, and for the implementation to go smoothly. We’ve covered some of the checklist of areas to consider – mainly in the areas of “Getting the team ready for the changes that are coming” and “Planning our your pipeline, environments, patterns and practices for development”, though there will be more detail, depending on where you’re coming from – and where you want to get to. This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.

    Read the article

  • How Mature is Your Database Change Management Process?

    - by Ben Rees
    Database Delivery Patterns & Practices
How do you get your database schema changes live, on to your production system? As your team of developers and DBAs are working on the changes to the database to support your business-critical applications, how do these updates wend their way through from dev environments, possibly to QA, hopefully through pre-production and eventually to production in a controlled, reliable and repeatable way? In this article, I describe a model we use to try and understand the different stages that customers go through as their database change management processes mature, from the very basic and manual, through to advanced continuous delivery practices. I also provide a simple chart that will help you determine "How mature is our database change management process?"
This process of managing changes to the database - which all of us who have worked in application/database development have had to deal with in one form or another - is sometimes known as Database Change Management (even if we've never used the term ourselves). And it's a difficult process, often painfully so. Some developers take the approach of "I've no idea how my changes get live - I just write the stored procedures and add columns to the tables. It's someone else's problem to get this stuff live. I think we've got a DBA somewhere who deals with it - I don't know, I've never met him/her". I know I used to work that way. I worked that way because I assumed that making the updates to production was a trivial task - how hard can it be? Pause the application for half an hour in the middle of the night, copy over the changes to the app and the database, and switch it back on again? Voila! But somehow it never seemed that easy. And it certainly was never that easy for database changes. Why? Because you can't just overwrite the old database with the new version. Databases have a state - more specifically 4Tb of critical data built up over the last 12 years of running your business, and if your quick hotfix happened to accidentally delete that 4Tb of data, then you're "Looking for a new role" pretty quickly after the failed release.
There are a lot of other reasons why a managed database change management process is important for organisations, besides job security, not least:
Frequency of releases. Many business managers are feeling the pressure to get functionality out to their users sooner, quicker and more reliably. The new book (which I highly recommend) Lean Enterprise by Jez Humble, Barry O'Reilly and Joanne Molesky provides a great discussion on how many enterprises are having to move towards a leaner, more frequent release cycle to maintain their competitive advantage. It's no longer acceptable to release once per year, leaving your customers waiting all year for changes they desperately need (and expect).
Auditing and compliance. SOX, HIPAA and other compliance frameworks have demanded that companies implement proper processes for managing changes to their databases, whether managing schema changes, making sure that the data itself is being looked after correctly or other mechanisms that provide an audit trail of changes.
We’ve found, at Red Gate that we have a very wide range of customers using every possible form of database change management imaginable. Everything from “Nothing – I just fix the schema on production from my laptop when things go wrong, and write it down in my notebook” to “A full Continuous Delivery process – any change made by a dev gets checked in and recorded, fully tested (including performance tests) before a (tested) release is made available to our Release Management system, ready for live deployment!”. And everything in between of course. Because of the vast number of customers using so many different approaches we found ourselves struggling to keep on top of what everyone was doing – struggling to identify patterns in customers’ behavior. This is useful for us, because we want to try and fit the products we have to different needs – different products are relevant to different customers and we waste everyone’s time (most notably, our customers’) if we’re suggesting products that aren’t appropriate for them. If someone visited a sports store, looking to embark on a new fitness program, and the store assistant suggested the latest $10,000 multi-gym, complete with multiple weights mechanisms, dumb-bells, pull-up bars and so on, then he’s likely to lose that customer. All he needed was a pair of running shoes! To solve this issue – in an attempt to simplify how we understand our customers and our offerings – we built a model. This is a an attempt at trying to classify our customers in to some sort of model or “Customer Maturity Framework” as we rather grandly term it, which somehow simplifies our understanding of what our customers are doing. The great statistician, George Box (amongst other things, the “Box” in the Box-Jenkins time series model) gave us the famous quote: “Essentially all models are wrong, but some are useful” We’ve taken this quote to heart – we know it’s a gross over-simplification of the real world of how users work with complex legacy and new database developments. Almost nobody precisely fits in to one of our categories. But we hope it’s useful and interesting. There are actually a number of similar models that exist for more general application delivery. We’ve found these from ThoughtWorks/Forrester, from InfoQ and others, and initially we tried just taking these models and replacing the word “application” for “database”. However, we hit a problem. From talking to our customers we know that users are far less further down the road of mature database change management than they are for application development. As a simple example, no application developer, who wants to keep his/her job would develop an application for an organisation without source controlling that code. Sure, he/she might not be using an advanced Gitflow branching methodology but they’ll certainly be making sure their code gets managed in a repo somewhere with all the benefits of history, auditing and so on. But this certainly isn’t the case (yet) for the database – a very large segment of the people we speak to have no source control set up for their databases whatsoever, even at the most basic level (for example, keeping change scripts in a source control system somewhere). By the way, if this is you, Red Gate has a great whitepaper here, on the barriers people face getting a source control process implemented at their organisations. 
This difference in maturity is the same as you move in to areas such as continuous integration (common amongst app developers, relatively rare for database developers) and automated release management (growing amongst app developers, very rare for the database). So, when we created the model we started from scratch and biased the levels of maturity towards what we actually see amongst our customers. But, what are these stages? And what level are you? The table below describes our definitions for four levels of maturity – Baseline, Beginner, Intermediate and Advanced. As I say, this is a model – you won’t fit any of these categories perfectly, but hopefully one will ring true more than others. We’ve also created a PDF with a flow chart to help you find which of these groups most closely matches your team:  Download the Database Delivery Maturity Framework PDF here   Level D1 – Baseline Work directly on live databases Sometimes work directly in production Generate manual scripts for releases. Sometimes use a product like SQL Compare or similar to do this Any tests that we might have are run manually Level D2 – Beginner Have some ad-hoc DB version control such as manually adding upgrade scripts to a version control system Attempt is made to keep production in sync with development environments There is some documentation and planning of manual deployments Some basic automated DB testing in process Level D3 – Intermediate The database is fully version-controlled with a product like Red Gate SQL Source Control or SSDT Database environments are managed Production environment schema is reproducible from the source control system There are some automated tests Have looked at using migration scripts for difficult database refactoring cases Level D4 – Advanced Using continuous integration for database changes Build, testing and deployment of DB changes carried out through a proper database release process Fully automated tests Production system is monitored for fast feedback to developers   Does this model reflect your team at all? Where are you on this journey? We’d be very interested in knowing how you get on. We’re doing a lot of work at the moment, at Red Gate, trying to help people progress through these stages. For example, if you’re currently not source controlling your database, then this is a natural next step. If you are already source controlling your database, what about the next stage – continuous integration and automated release management? To help understand these issues, there’s a summary of the Red Gate Database Delivery learning program on our site, alongside a Patterns and Practices library here on Simple-Talk and a Training Academy section on our documentation site to help you get up and running with the tools you need to progress. All feedback is welcome and it would be great to hear where you find yourself on this journey! This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.

    Read the article

  • Gnome-shell worskpace preview on dual screen

    - by martin
    I have installed GNOME 3 and I have dual screens (laptop, monitor). Is it possible to see the other monitor in the workspace preview in the GNOME panel? I can only see the primary monitor, which is why I'm not able to move windows onto the second screen. I've tried to change some settings in dconf, but with no luck. I want to have it like this: http://stephen.rees-carter.net/wp-content/uploads/2011/04/Unity-workspace-switcher.png Does anyone have a solution?

    Read the article

  • how to enable remote access to a MySQL server on an AZURE virtual machine

    - by Rees
    I have an Azure virtual machine running Ubuntu 13.04 with a MySQL server installed on it. I am trying to connect remotely to the MySQL server, however I get the simple error "Can't connect to MySQL server on {IP}". I have already done the following:
    * commented out the bind-address within /etc/mysql/my.cnf
    * commented out skip-external-locking within the same my.cnf
    * "ufw allow mysql"
    * "iptables -A INPUT -i eth0 -p tcp -m tcp --dport 3306 -j ACCEPT"
    * set up an Azure endpoint for MySQL
    * "sudo netstat -lpn | grep 3306" does indeed show mysql LISTENING
    * "GRANT ALL ON *.* TO remote@'%' IDENTIFIED BY 'password';"
    * "GRANT ALL ON *.* TO remote@'localhost' IDENTIFIED BY 'password';"
    * "/etc/init.d/mysql restart"
    * I can connect via SSH tunneling, but not without it
    * I have spun up an identical Ubuntu 13.04 server on Rackspace and SUCCESSFULLY connected using the same procedures outlined here.
    NONE of the above works on my Azure server however. I thought the creation of an endpoint would work, but no luck. Any help please? Is there something I'm missing entirely?
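
    One quick way to narrow this down is to attempt a direct TCP connection from the remote machine and look at the exact error: a timeout usually points at the endpoint/firewall layer, while "access denied" points at the grants. A minimal PHP sketch for that check, assuming the placeholder host, user and password below:

        <?php
        // Hedged diagnostic sketch: run this from the remote client machine.
        $host = "your-vm.cloudapp.net";  // assumption: the Azure VM's public DNS name or IP
        $user = "remote";
        $pass = "password";

        // Distinguishes "can't reach port 3306" (endpoint/firewall) from "access denied" (grants).
        $link = @mysqli_connect($host, $user, $pass, "", 3306);
        if (!$link) {
            echo "Connect failed: (" . mysqli_connect_errno() . ") " . mysqli_connect_error() . "\n";
        } else {
            echo "Connected OK\n";
            mysqli_close($link);
        }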

    Read the article

  • ExternalInterface in flex calling javascript function works for mozilla/chrome but NOT IE

    - by Rees
    Hello, I have a Flex application that does a simple ExternalInterface.call("shareOptions"), which calls a shareOptions() JavaScript method and works absolutely fine with Mozilla and Chrome. However, when I test with IE I get the following error:
    Error: [object Error] at flash.external::ExternalInterface$/_toAS() at flash.external::ExternalInterface$/call()
    I looked at the Adobe livedocs documentation but can't determine what the issue is with IE. Is there something I'm missing? If anyone knows, please let me know ASAP! Thanks in advance.
    private function shareOptions(event:MouseEvent):void{
        ExternalInterface.marshallExceptions = true;
        if (ExternalInterface.available){
            ExternalInterface.call("shareOptions");
        }
    }
    The JavaScript:
    <script language="JavaScript" type="text/javascript">
    function shareOptions() {
        myWin = window.open('http://www.mysite.shareOptions.php','yeee!','width=640,height=690,toolbar=no,location=0,directories=no,status=no,menubar=no,scrollbars=no,copyhistory=no,resizable=yes,x=500,y=500');
        myWin.moveTo(300,300);
    }
    </script>

    Read the article

  • flex debugger (how to retrieve a session variable set by a browser)

    - by Rees
    Hello, I'm creating a Flex application and trying to debug using the "Network Monitor" view. The script I'm debugging fetches a PHP session variable (the PHP outputs XML) and the ActionScript retrieves the value from the HTTPService event. If I am using, say, a Chrome browser, I can correctly retrieve the session variable ANY TIME. If I switch to, say, a Firefox browser, then clearly the Chrome session variable is unavailable to Firefox. My issue is that I create the session variable with, say, Chrome, and then try to retrieve my session variable from my FLEX application debugger (which always returns null) - when I really want it to return my session variable. Is there a way for my Flex debugger to retrieve this session variable set by Chrome (or any browser)? (I'm even using Chrome as my debugging browser for Flex.)
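
    One workaround sometimes used when a Flash/Flex request doesn't share the browser's session cookie is to pass the session id explicitly as a request parameter and resume that session in PHP. A minimal sketch of the PHP side, assuming a hypothetical "sid" parameter and "my_variable" session key (note that passing session ids in URLs has security implications, so treat this as a debugging aid):

        <?php
        // Hedged sketch: resume an existing browser session from an id passed by the Flex app.
        // The "sid" parameter name is an assumption - use whatever your HTTPService sends.
        if (isset($_GET['sid'])) {
            session_id($_GET['sid']);   // must be called before session_start()
        }
        session_start();

        header("Content-Type: text/xml");
        $value = isset($_SESSION['my_variable']) ? $_SESSION['my_variable'] : '';
        echo '<?xml version="1.0"?>';
        echo '<result><value>' . htmlspecialchars($value) . '</value></result>';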

    Read the article

  • facebook connect api "Cannot use string offset as an array in" error

    - by Rees
    Please help! I have been grappling with this error for days and I cannot for the life of me figure it out. I am using Facebook Connect and fetching a "contact_email" attribute using their API method users_getInfo. The issue is that when I execute this PHP file, I get this error: "Cannot use string offset as an array in...". This error specifically refers to this line of code: $firstName=$user_details[0]['contact_email']; I'm thinking this is because the users_getInfo method is not returning any results... However, the most ridiculous part about all this is that I can execute the code below several dozen times in a row SUCCESSFULLY without the above error, BUT THEN randomly, without changing ANY code at all, I will suddenly encounter this error, in which case it will give me the error several dozen times, and then AGAIN without any code change, start executing successfully again. This odd behavior occurs regardless of the attribute I am fetching (contact_email, first_name, last_name, etc.). I am running PHP 5.2.11. Is there something I'm missing?? Please help!
    include_once 'site/fbconnect/config.php'; // has $api_key and $secret defined.
    include_once 'site/facebook-platform/client/facebook.php';
    global $api_key, $secret;
    $fb = new Facebook($api_key, $secret);
    $fb->require_login();
    $fb_user = $fb->get_loggedin_user();
    $user_details = $fb->api_client->users_getInfo($fb_user, array('last_name','first_name','contact_email'));
    $email = $user_details[0]['contact_email'];
    $firstName = $user_details[0]['first_name'];
    $lastName = $user_details[0]['last_name'];
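
    Given the intermittent failures described, one defensive sketch is to check what users_getInfo actually returned before indexing into it - if the call ever hands back an error string or an empty result, indexing it as a nested array produces exactly this PHP error. The variable names follow the snippet above; this only guards and logs, it doesn't explain why the API response varies:

        <?php
        // Hedged sketch: guard against users_getInfo returning an error string or empty result.
        $user_details = $fb->api_client->users_getInfo($fb_user, array('last_name', 'first_name', 'contact_email'));

        if (is_array($user_details) && isset($user_details[0]) && is_array($user_details[0])) {
            $email     = isset($user_details[0]['contact_email']) ? $user_details[0]['contact_email'] : null;
            $firstName = isset($user_details[0]['first_name'])    ? $user_details[0]['first_name']    : null;
            $lastName  = isset($user_details[0]['last_name'])     ? $user_details[0]['last_name']     : null;
        } else {
            // Log the raw response to see what the API handed back on the failing calls.
            error_log('users_getInfo returned: ' . var_export($user_details, true));
        }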

    Read the article

  • validate a .edu or .ac email address

    - by Rees
    Hello, I'm a noob, but I'm trying vigorously to simply validate email addresses that end only in ".edu" or ".ac". Is there a simple function/script/solution to this seemingly simple problem? I'm able to use PHP, JavaScript or jQuery. Any help would be great, thanks in advance!
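
    A minimal PHP sketch of one way to do this - validate the address first, then check the domain suffix with a regular expression. The pattern below only covers ".edu" and ".ac" endings (including country forms like ".ac.uk" or ".edu.au"); adjust it if you need other academic TLDs:

        <?php
        // Hedged sketch: true only for syntactically valid addresses ending in .edu or .ac (optionally .xx).
        function is_academic_email($email) {
            if (!filter_var($email, FILTER_VALIDATE_EMAIL)) {
                return false;
            }
            return (bool) preg_match('/\.(edu|ac)(\.[a-z]{2})?$/i', $email);
        }

        var_dump(is_academic_email('student@example.edu'));   // true
        var_dump(is_academic_email('student@example.ac.uk')); // true
        var_dump(is_academic_email('someone@example.com'));   // false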

    Read the article

  • ArrayCollection error in Flex does not accept single XML nodes - alternatives?

    - by Rees
    Hello, I get this error when I retrieve an XML document that has only 1 node (no repeating nodes) and I try to store it in an ArrayCollection. When I have MORE than one "name" node, I do NOT get an error. TypeError: Error #1034: Type Coercion failed: cannot convert "XXXXXX" to mx.collections.ArrayCollection. The error occurs at this line of code: myList = e.result.list.name; Why can't ArrayCollection work with a single node? I'm using this ArrayCollection as a dataProvider for a component - is there an alternative I can use that will take BOTH single and repeating nodes as well as work as a dataProvider? Thanks in advance! Code:
    [Bindable] private var myList:ArrayCollection = new ArrayCollection();
    private function getList(e:Event):void{
        var getStuffService:HTTPService = new HTTPService();
        getStuffService.url = "website.com/asdf.php";
        getStuffService.addEventListener(ResultEvent.RESULT, onGetList);
        getStuffService.send();
    }
    private function onGetList(e:ResultEvent):void{
        myList = e.result.list.name;
    }

    Read the article

  • tool to remove repeated css selectors

    - by Rees
    I have 3 stylesheets that I have copied and pasted into 1. As such, there are identical/repeated selectors. Is there an effective tool I can use to remove all repeated selectors if the style properties are identical? Another requirement is that if the repeated selector exists once within a group of selectors and once as a standalone selector, the program should remove the selector AND properties of the standalone only (not delete the properties of the selector in a group). I have no clue if this is possible, but I have just spent a few hours looking for one, to no avail. Anyone know of anything close? Thanks in advance!

    Read the article

  • flex transition effects works on 2nd and after transition, but not on very first transition

    - by Rees
    I have a Flex app that transitions between 2 states with the toggle of a button. My issue is that the fade effect only seems to work on the 2nd transition and after. However, for my first transition - going from State1 to studyState - there is no fade effect whatsoever; in fact the components in State1 disappear completely (the footer fills the empty gap where the "body" used to be) and then Flex recreates the studyState (without any fade, refilling the "body" with components only in studyState). After this first transition, however, going between studyState and State1 works COMPLETELY fine. Why does this happen, and how can I make it so that the crossfade works STARTING FROM THE VERY FIRST TRANSITION? Please help!
    <s:VGroup id="globalGroup" includeIn="State1" width="100%"></s:VGroup>
    <s:VGroup id="studyGroup" includeIn="studyState" width="100%"></s:VGroup>

    Read the article

  • possible to make text messaging with php have a constant "telephone number" value?

    - by Rees
    Hello, I have an iPhone 3G and can successfully send text messages to it using the PHP mail() function. My issue is that for each message I receive, the "telephone number" associated with the incoming text message changes. If possible, I would like to make this number constant so that I can take advantage of the iPhone's ability to aggregate all text messages from the same telephone number; otherwise my iPhone gets cluttered with separate threads. Is there a way to do this? Examples of the numbers I receive are 1(410) 000-001, 1(410) 000-002, 1(410) 000-003, etc. Can I make this constant somehow? $message = stripslashes("new user just joined!"); mail("[email protected]", "Subject", $message); please let me know! Thanks...
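
    For what it's worth, one thing to try (this is purely an assumption about how the carrier's email-to-SMS gateway assigns those incoming numbers; nothing in the mail() documentation guarantees it) is to send every message from one fixed From address, since some gateways appear to key the virtual sender number to the sending address. A sketch, with an illustrative sender address:

    $message = stripslashes("new user just joined!");
    // Always send from the same address so the gateway sees a consistent sender.
    $headers = "From: alerts@example.com\r\nReply-To: alerts@example.com\r\n";
    mail("[email protected]", "Subject", $message, $headers);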

    Read the article

  • Inherit a parent class docstring as __doc__ attribute

    - by Reinout van Rees
    There is a question about Inherit docstrings in Python class inheritance, but the answers there deal with method docstrings. My question is how to inherit a parent class's docstring as the __doc__ attribute. The use case is that Django REST framework generates nice documentation in the HTML version of your API based on your view classes' docstrings. But when inheriting a base class (with a docstring) in a class without a docstring, the API doesn't show the docstring. It might very well be that Sphinx and other tools do the right thing and handle the docstring inheritance for me, but Django REST framework looks at the (empty) .__doc__ attribute. class ParentWithDocstring(object): """Parent docstring""" pass class SubClassWithoutDocstring(ParentWithDocstring): pass parent = ParentWithDocstring() print parent.__doc__ # Prints "Parent docstring". subclass = SubClassWithoutDocstring() print subclass.__doc__ # Prints "None" I've tried something like super(SubClassWithoutDocstring, self).__doc__, but that also only got me None.

    Read the article

  • database security with php page that spits out XML

    - by Rees
    Hello, I just created a PHP page that spits out some data from my database in an XML format. This data is fetched from a Flex application I made. I have spent a long time formatting my tables and database information and do not want anyone to be able to simply type www.mysite.com/page_that_spits_out_XML.php and steal my data. However, at the same time I need to be able to access this page from my Flex application. Is there a way I can prevent other people from doing this? Thank you!
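
    There is no way to make a URL that a Flash client can reach completely private, because anything the SWF sends can be replicated by someone who decompiles it, but a shared-secret check raises the bar considerably. A minimal sketch, assuming the Flex app POSTs a token parameter along with its request (the parameter name and value are illustrative):

    // page_that_spits_out_XML.php
    $expected_token = 'replace-with-a-long-random-string';

    if (!isset($_POST['token']) || $_POST['token'] !== $expected_token) {
        header('HTTP/1.1 403 Forbidden');
        exit('Forbidden');
    }

    header('Content-Type: text/xml');
    // ... existing code that queries the database and echoes the XML ...

    A stronger option, if users log in to the Flex app at all, is to check a server-side session here instead of relying on a token baked into the SWF.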

    Read the article

  • flex actionscript not uploading file to PHP page HELP!

    - by Rees
    Hello, please help! I am using ActionScript 3 with the Flex 3.5 SDK and PHP to let a user upload a file; that is my goal. However, when I check my server directory for the file, NOTHING is there! Something is going wrong, even though the ActionScript reports a successful upload (and I have even added all the upload error event listeners and none are triggered). I have also tested the PHP script, and it uploads SUCCESSFULLY when receiving a file from another PHP page (so I'm led to believe there is nothing wrong with my PHP). ActionScript is NOT giving me any errors when I upload; in fact it dispatches a success event, and I know the Flex application is actually trying to send the data, because when I attempt to upload a large file it takes significantly longer to report "success" than when I upload a small file. I feel I have debugged every aspect of this code and am now spent. Please, anyone, can you tell me what is going wrong?? Or at least how I can find out what is happening? I'm using the Flash debugger and I'm still getting zero errors. private var fileRef:FileReference = new FileReference(); private var flyerrequest:URLRequest = new URLRequest("http://mysite.com/sub/upload_file.php"); private function uploadFile():void{ fileRef.browse(); fileRef.addEventListener(Event.SELECT, selectHandler); fileRef.addEventListener(Event.COMPLETE, completeHandler); } private function selectHandler(event:Event):void{ fileRef.upload(flyerrequest); } private function completeHandler(event:Event):void{ Alert.show("uploaded"); }
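
    One thing worth double-checking on the PHP side: FileReference.upload() posts the file under the form field name "Filedata" by default, so if upload_file.php was written around a normal HTML form field name it will never see the Flash upload even though Flash reports success. A minimal receiving sketch (the uploads/ target directory is illustrative):

    // upload_file.php
    if (isset($_FILES['Filedata'])) {
        $target = dirname(__FILE__) . '/uploads/' . basename($_FILES['Filedata']['name']);
        if (move_uploaded_file($_FILES['Filedata']['tmp_name'], $target)) {
            echo 'ok';
        } else {
            // Usually a permissions problem on the uploads/ directory.
            error_log('move_uploaded_file failed for ' . $target);
            echo 'error';
        }
    } else {
        // Log what actually arrived so any field-name mismatch becomes visible.
        error_log('No Filedata field in upload request: ' . print_r($_FILES, true));
        echo 'error';
    }

    If the working PHP-to-PHP test posted the file under a different field name, that alone would explain a "successful" Flash upload that never lands on disk.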

    Read the article

  • what dataprovider in flex (aside from ArrayCollection) will work for single, non-repeating XML nodes

    - by Rees
    Hello, I asked this question before but haven't been able to get an answer. I get the following error when I retrieve an XML that has only 1 node (no repeating nodes) from a PHP page and try to store it in an ArrayCollection. When I have MORE than 1 "name" node, I do NOT get an error. TypeError: Error #1034: Type Coercion failed: cannot convert "XXXXXX" to mx.collections.ArrayCollection. The error occurs at this line of code: myList = e.result.list.name; I'm using this ArrayCollection as a dataProvider for a component. Is there an alternative I can use that will take BOTH single and repeating nodes as well as work as a dataProvider? Thanks in advance! code: [Bindable] private var myList:ArrayCollection = new ArrayCollection(); private function getList(e:Event):void{ var getStuffService:HTTPService = new HTTPService(); getStuffService.url = "website.com/asdf.php"; getStuffService.addEventListener(ResultEvent.RESULT, onGetList); getStuffService.send(); } private function onGetList(e:ResultEvent):void{ myList = e.result.list.name; }

    Read the article

  • how to prevent white spaces in a regular expression regex validation

    - by Rees
    I am completely new to regular expressions and am trying to create a regular expression in Flex for a validation. Using a regular expression, I want to validate that the user input does NOT contain any whitespace and consists only of letters and digits, starting with a letter. So far I have: expression="[A-Za-z][A-Za-z0-9]*" This correctly checks that the input starts with a letter followed by optional letters/digits, but it does not check for whitespace (in my tests, input containing a space still passes validation, which is not desired). Can someone tell me how I can modify this expression to ensure that input containing whitespace is flagged as invalid?
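
    The pattern itself can never match a space; the problem is that, without anchors, a validator only needs to find a match somewhere inside the input, so " abc" still passes. Anchoring the expression to the start and end of the string makes any whitespace cause a failure. A quick PHP illustration of the anchored pattern (the same ^...$ form should carry over to Flex's RegExpValidator, though that is my assumption about how that validator matches):

    $pattern = '/^[A-Za-z][A-Za-z0-9]*$/';

    var_dump((bool) preg_match($pattern, 'abc123'));   // true
    var_dump((bool) preg_match($pattern, 'abc 123'));  // false (space inside)
    var_dump((bool) preg_match($pattern, ' abc123'));  // false (leading space)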

    Read the article

  • possible to create sql query with table wildcards?

    - by Rees
    This is probably a simple question, but I have been unable to find a solution online; any help would be much appreciated. I'm trying to create an SQL query in PHP and would like to somehow apply a wildcard to the TABLE name, something like: select * from %table%. So far, however, I have only been able to find wildcards for column values, not table names. As an example, I have tables such as jan_table_1, feb_table_1, jan_table_2, feb_table_2, and would want to select only the tables with a "jan" prefix, or a "1" suffix. Is there a quick and easy solution to this that I have not seen? Thanks in advance!
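
    SQL itself has no wildcard for table names, but the server can be asked for the matching names first and the queries built from that list in PHP. A minimal sketch, assuming MySQL and the legacy mysql_* functions (swap in mysqli/PDO as appropriate):

    // Tables whose names start with "jan_" (the backslash keeps _ literal in LIKE).
    $tables = mysql_query("SHOW TABLES LIKE 'jan\_%'");
    while ($row = mysql_fetch_row($tables)) {
        $table = $row[0];
        // Table names cannot be bound as query parameters, so they are
        // interpolated here; that is safe only because they came from SHOW TABLES.
        $result = mysql_query("SELECT * FROM `$table`");
        // ... process $result ...
    }

    The same idea works for a suffix (SHOW TABLES LIKE '%\_1'), or more generally by querying information_schema.tables.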

    Read the article

  • how to change font color and size within single label component in flex

    - by Rees
    I'm trying to create an unordered list in Flex. My issue is that within each line, I want the word NEW to be a different font color and a different font size from the rest of the label text. I am unsure how to do this INLINE within the Label component. Any thoughts, anyone? <s:VGroup fontSize="15" color="#ffffff"> <s:Label text="\u2022 NEW Invite your friends!" /> <s:Label text="\u2022 NEW Features coming soon!" /> <s:Label text="\u2022 NEW Invite your friends!" /> </s:VGroup>

    Read the article

  • flex data provider not working if XML has single node value or less

    - by Rees
    Hello, I get this error when I retrieve an XML that has only 1 node (no repeating nodes) and try to store it in an ArrayCollection. When I have MORE than 1 "name" node, I do NOT get an error. My tests show that XMLListCollection does NOT work either. TypeError: Error #1034: Type Coercion failed: cannot convert "XXXXXX" to mx.collections.ArrayCollection. The error occurs at this line of code: myList = e.result.list.name; Why can't ArrayCollection work with a single node? I'm using this ArrayCollection as a dataProvider for a component. Is there an alternative I can use that will take BOTH single and repeating nodes as well as work as a dataProvider? Thanks in advance! code: [Bindable] private var myList:ArrayCollection = new ArrayCollection(); private function getList(e:Event):void{ var getStuffService:HTTPService = new HTTPService(); getStuffService.url = "website.com/asdf.php"; getStuffService.addEventListener(ResultEvent.RESULT, onGetList); getStuffService.send(); } private function onGetList(e:ResultEvent):void{ myList = e.result.list.name; }

    Read the article

1 2  | Next Page >