Search Results

Search found 19923 results on 797 pages for 'instance variables'.


  • Let a model instance choose an appropriate view class using a category. Is it good design?

    - by Denis Mikhaylov
    Assume I have an abstract base model class called MoneySource, with two concrete subclasses, BankCard and CellularAccount. In MoneySourceListViewController I want to display a list of them, but with a different ListItemView for each MoneySource subclass. What if I define a category on MoneySource:

        @interface MoneySource (ListItemView)
        - (Class)listItemViewClass;
        @end

    and then override it in each concrete subclass of MoneySource, returning the suitable view class:

        @implementation BankCard (ListItemView)
        - (Class)listItemViewClass { return [BankCardListView class]; }
        @end

        @implementation CellularAccount (ListItemView)
        - (Class)listItemViewClass { return [CellularAccountListView class]; }
        @end

    This way I can ask the model object about its view without violating MVC principles, and avoid class introspection or if constructions. Thank you!
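
    The consuming side of this pattern might look like the following minimal sketch (my addition, not from the question; moneySources, indexPath and frame are assumed names):

        // Ask the model for its view class, then instantiate it without
        // knowing the concrete subclass.
        MoneySource *source = [self.moneySources objectAtIndex:indexPath.row];
        Class viewClass = [source listItemViewClass];
        UIView *itemView = [[[viewClass alloc] initWithFrame:frame] autorelease];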

    Read the article

  • If all variables are a subset of the superkey, is the database design 5NF?

    - by Lukazoid
    I have a table called LogMessages, which has the following columns:

        Level    A numeric value representing Trace, Debug, Info, Warning, Error or Fatal
        Time     A UTC time
        Message  Foreign key to a Messages table
        Source   Foreign key to a Sources table
        User     Foreign key to a Users table

    From what I can see, all of these columns are part of the superkey; if any single value differs from an existing row, a new row can be created. My question is: does this design comply with fifth normal form? I am unsure, as some groups of data will be repeating; however, I don't believe this violates 5NF? (Correct me if I'm wrong.)

    Read the article

  • Locale variables have no effect in remote shell (perl: warning: Setting locale failed.)

    - by Janning
    I have a fresh Ubuntu 12.04 installation. When I connect to my remote server I get errors like this:

        ~$ ssh example.com sudo aptitude upgrade
        ...
        Traceback (most recent call last):
          File "/usr/bin/apt-listchanges", line 33, in <module>
            from ALChacks import *
          File "/usr/share/apt-listchanges/ALChacks.py", line 32, in <module>
            sys.stderr.write(_("Can't set locale; make sure $LC_* and $LANG are correct!\n"))
        NameError: name '_' is not defined
        perl: warning: Setting locale failed.
        perl: warning: Please check that your locale settings:
            LANGUAGE = (unset),
            LC_ALL = (unset),
            LC_TIME = "de_DE.UTF-8",
            LC_MONETARY = "de_DE.UTF-8",
            LC_ADDRESS = "de_DE.UTF-8",
            LC_TELEPHONE = "de_DE.UTF-8",
            LC_NAME = "de_DE.UTF-8",
            LC_MEASUREMENT = "de_DE.UTF-8",
            LC_IDENTIFICATION = "de_DE.UTF-8",
            LC_NUMERIC = "de_DE.UTF-8",
            LC_PAPER = "de_DE.UTF-8",
            LANG = "en_US.UTF-8"
        are supported and installed on your system.
        perl: warning: Falling back to the standard locale ("C").
        locale: Cannot set LC_ALL to default locale: No such file or directory
        No packages will be installed, upgraded, or removed.
        0 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        Need to get 0 B of archives. After unpacking 0 B will be used.
        ...

    I don't have this problem when I connect from an older Ubuntu installation. This is the output on my Ubuntu 12.04 installation; LANG and LANGUAGE are set:

        $ locale
        LANG=de_DE.UTF-8
        LANGUAGE=de_DE:en_GB:en
        LC_CTYPE="de_DE.UTF-8"
        LC_NUMERIC=de_DE.UTF-8
        LC_TIME=de_DE.UTF-8
        LC_COLLATE="de_DE.UTF-8"
        LC_MONETARY=de_DE.UTF-8
        LC_MESSAGES="de_DE.UTF-8"
        LC_PAPER=de_DE.UTF-8
        LC_NAME=de_DE.UTF-8
        LC_ADDRESS=de_DE.UTF-8
        LC_TELEPHONE=de_DE.UTF-8
        LC_MEASUREMENT=de_DE.UTF-8
        LC_IDENTIFICATION=de_DE.UTF-8
        LC_ALL=

    Does anybody know what has changed in Ubuntu to cause this error message on remote servers?
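
    The usual culprit (my note, not part of the question) is that the desktop's ssh client forwards its LC_* variables (SendEnv in /etc/ssh/ssh_config) while the server has no de_DE.UTF-8 locale generated. Two common fixes, sketched under that assumption:

        # On the remote server: generate the locale the client is forwarding
        sudo locale-gen de_DE.UTF-8
        sudo dpkg-reconfigure locales

        # Or on the client: stop forwarding locale variables by commenting
        # out this line in /etc/ssh/ssh_config
        #   SendEnv LANG LC_*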

    Read the article

  • Deploying and configuring the network (firewall) of a CloudStack instance with Apache preinstalled, third tutorial in a series on CloudStack

    Hello,

    Quote:

        CloudStack is open-source cloud computing software for creating, managing and deploying infrastructure cloud services. It uses existing hypervisors such as KVM, vSphere and XenServer/XCP for virtualization. In addition to its own API, CloudStack also supports Amazon Web Services.

    Here is a series of tests carried out by Ikoula on this software. I present this third tutorial on CloudStack: Deploying...

    Read the article

  • Deploying a Debian 7 instance with a ready-to-use WordPress in a few seconds with CloudStack, fourth tutorial in a series on CloudStack

    Hello,

    Quote:

        CloudStack is open-source cloud computing software for creating, managing and deploying infrastructure cloud services. It uses existing hypervisors such as KVM, vSphere and XenServer/XCP for virtualization. In addition to its own API, CloudStack also supports Amazon Web Services.

    Here is a series of tests carried out by Ikoula on this software. I present this fourth tutorial on CloudStack: Deploying...

    Read the article

  • SQL Server 2008 System Functions to Monitor the Instance, Database, Files, etc.

    SQL Server provides several system metadata functions that allow users to obtain property values of different SQL Server objects and securables. Although you can also use the SQL Server catalog views or Dynamic Management Views to obtain much of this information, in some circumstances the system metadata functions simplify the process. In this tip I am going to demonstrate some of the available system metadata functions and their usage in different scenarios.
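
    For instance (a small illustrative sketch of this family of functions, not an excerpt from the tip itself; 'dbo.MyTable' is a placeholder name):

        -- Instance-level properties
        SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
               SERVERPROPERTY('Edition')        AS Edition;

        -- Database-level property
        SELECT DATABASEPROPERTYEX('master', 'Recovery') AS RecoveryModel;

        -- Object-level property ('dbo.MyTable' is hypothetical)
        SELECT OBJECTPROPERTY(OBJECT_ID('dbo.MyTable'), 'IsUserTable') AS IsUserTable;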

    Read the article

  • CMake: the build system is released in version 3.0, with new generators, variables, properties, and better handling of cross-compilation

    CMake 3 is now available! Discover what's new in the cross-platform build system: new manual pages (including one for Qt), new generators, and many other additions. CMake is a cross-platform, open-source build and project-generation system. From a simple CMakeLists.txt file describing your project, CMake can generate the project files for your favorite IDE. In short, it configures your Visual Studio, Code::Blocks,...

    Read the article

  • TypeError: unbound method make_request() must be called with XX instance, but how?

    - by Dave
    Running the code below I get:

        E TypeError: unbound method make_request() must be called with A instance as first argument (got str instance instead)

    I don't want to make the make_request method static; I want to call it on an instance of an object. The example from http://pytest.org/latest/fixture.html#fixture-function:

        # content of ./test_smtpsimple.py
        import pytest

        @pytest.fixture
        def smtp():
            import smtplib
            return smtplib.SMTP("merlinux.eu")

        def test_ehlo(smtp):
            response, msg = smtp.ehlo()
            assert response == 250
            assert "merlinux" in msg
            assert 0  # for demo purposes

    My code:

        """ """
        import pytest

        class A(object):
            """ """
            def __init__(self, name):
                """ """
                self._prop1 = [name]

            @property
            def prop1(self):
                return self._prop1

            @prop1.setter
            def prop1(self, arguments):
                self._prop1 = arguments

            def make_request(self, sex):
                return 'result'

            def __call__(self):
                return self

        @pytest.fixture()
        def myfixture():
            """ """
            A('BigDave')
            return A

        def test_validateA(myfixture):
            result = myfixture.make_request('male')
            assert result == 'result'
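
    A minimal sketch of the likely fix (my reading of the question, not part of the original post): the fixture constructs an A but then returns the class itself, so the test calls make_request on the class with a string as its first argument; returning the instance resolves the TypeError:

        @pytest.fixture()
        def myfixture():
            # Return the instance, not the class, so make_request is bound
            return A('BigDave')

        def test_validateA(myfixture):
            assert myfixture.make_request('male') == 'result'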

    Read the article

  • In PHP, is it possible to create an instance of a class without calling the class's constructor?

    - by Rachel
    By any means, is it possible to create an instance of a PHP class without calling its constructor? I have class A, and while creating an instance of it I am passing a file; in the constructor of class A I am opening that file. Now there is a function in class A which I need to call, but I am not required to pass a file, so there is no need for the constructor's file-opening behaviour. So my question is: is it possible, by any means, to create an instance of a PHP class without calling its constructor?
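
    One approach (a sketch, assuming PHP 5.4 or later; class A is the asker's): reflection can build the object while skipping __construct() entirely:

        <?php
        // Instantiates A without invoking A::__construct()
        $reflection = new ReflectionClass('A');
        $instance = $reflection->newInstanceWithoutConstructor();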

    Read the article

  • How can I create a new class instance from a class within a (static) class?

    - by Mervin
    I'm new to Java (I have experience with C#). This is what I want to do:

        public final class MyClass {
            public class MyRelatedClass {
                ...
            }
        }

        public class OtherRandomClass {
            public void DoStuff() {
                MyRelatedClass data = new MyClass.MyRelatedClass();
            }
        }

    which gives this error in Eclipse: "No enclosing instance of type BitmapEffects is accessible. Must qualify the allocation with an enclosing instance of type BitmapEffects (e.g. x.new A() where x is an instance of BitmapEffects)." This is possible in C# with static classes; how should it be done here?
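
    The usual resolution (a sketch based on the quoted error, not from the original post): a Java inner class implicitly needs an enclosing instance unless it is declared static, which is the closest analogue of the C# nested-class behaviour:

        public final class MyClass {
            // 'static' removes the need for an enclosing MyClass instance
            public static class MyRelatedClass {
            }
        }

        public class OtherRandomClass {
            public void DoStuff() {
                MyClass.MyRelatedClass data = new MyClass.MyRelatedClass();
            }
        }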

    Read the article

  • How do I dynamically update an instance array to hold a list of dynamic methods on instantiation?

    - by Will
    I am trying to dynamically define methods based on XML mappings. This works really well. However, I want to create an instance variable that is an array of the dynamically defined methods. My code looks something like this:

        def xml_attr_reader(*args)
          xml_list = ""
          args.each do |arg|
            string_val = "def #{arg}; " +
                         "  xml_mapping.#{arg}; " +
                         "end; "
            self.class_eval string_val
            xml_hash = xml_list + "'#{arg}',"
          end
          self.class_eval "@xml_attributes = [] if @xml_attributes.nil?;" +
                          "@xml_attributes = @xml_attributes + [#{xml_list}];" +
                          "puts 'xml_attrs = ' + @xml_attributes.to_s;" +
                          "def xml_attributes;" +
                          "  puts 'xml_attrs = ' + @xml_attributes.to_s;" +
                          "  @xml_attributes;" +
                          "end"
        end

    So everything works, except that when I call xml_attributes on an instance it returns nil (and prints out 'xml_attrs = '), while the puts just before the method definition prints out the correct array when I instantiate the instance.
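
    A sketch of one common repair (my suggestion, not from the post): the @xml_attributes assigned inside class_eval lives on the class object, while the generated xml_attributes method reads each instance's own, never-assigned @xml_attributes, hence nil. Keeping the list on the class and exposing a class-level reader sidesteps the mismatch, and define_method avoids string eval:

        module XmlAttrReader
          def xml_attr_reader(*args)
            @xml_attributes ||= []
            args.each do |arg|
              @xml_attributes << arg.to_s
              # instance method delegating to the xml mapping
              define_method(arg) { xml_mapping.send(arg) }
            end
          end

          # class-level reader, since @xml_attributes belongs to the class
          def xml_attributes
            @xml_attributes || []
          end
        end

        class Foo
          extend XmlAttrReader
          xml_attr_reader :title, :author
        end

        Foo.xml_attributes  # => ["title", "author"]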

    Read the article

  • Given an instance of a Ruby object, how do I get its metaclass?

    - by Stanislaus Wernstrom
    Normally, I might get the metaclass for a particular instance of a Ruby object with something like this:

        class C
          def metaclass
            class << self; self; end
          end
        end

        # This is this instance's metaclass.
        C.new.metaclass  # => #<Class:#<C:0x01234567>>

        # Successive invocations will have different metaclasses,
        # since they're different instances.
        C.new.metaclass  # => #<Class:#<C:0x01233...>>
        C.new.metaclass  # => #<Class:#<C:0x01232...>>
        C.new.metaclass  # => #<Class:#<C:0x01231...>>

    Let's say I just want to know the metaclass of an arbitrary object instance obj of an arbitrary class, and I don't want to define a metaclass (or similar) method on the class of obj. Is there a way to do that?
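
    A short sketch of the usual answers (my addition, not from the post): the same class << trick works inline on any object, and Ruby 1.9.2 and later expose it directly:

        meta = (class << obj; self; end)   # works without touching obj's class
        meta = obj.singleton_class         # Ruby 1.9.2+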

    Read the article

  • What happens to an instance of ServerSocket blocked inside accept(), when I drop all references to i

    - by Hanno Fietz
    In a multithreaded Java application, I just tracked down a strange-looking bug, realizing that what seemed to be happening was this:

    - one of my objects was storing a reference to an instance of ServerSocket on startup,
    - one thread would, in its main loop in run(), call accept() on the socket,
    - while the socket was still waiting for a connection, another thread would try to restart the component under some conditions,
    - the restart process missed the cleanup sequence before it reached the initialization sequence,
    - as a result, the reference to the socket was overwritten with a new instance, which then wasn't able to bind() anymore,
    - the socket which was blocking inside the accept() wasn't accessible anymore, leaving a complete shutdown and restart of the application as the only way to get rid of it.

    Which leaves me wondering: with no references left to the ServerSocket instance, what would free the socket for a new connection? At what point would the ServerSocket become garbage collected? In general, what are good practices I can follow to avoid this type of bug?

    Read the article

  • Can I use a single instance of a delegate to start multiple Asynchronous Requests?

    - by RobV
    Just wondered if someone could clarify the use of BeginInvoke on an instance of some delegate when you want to make multiple asynchronous calls, since the MSDN documentation doesn't really cover/mention this at all. What I want to do is something like the following:

        MyDelegate d = new MyDelegate(this.TargetMethod);
        List<IAsyncResult> results = new List<IAsyncResult>();

        // Start multiple asynchronous calls
        for (int i = 0; i < 4; i++)
        {
            results.Add(d.BeginInvoke(someParams, null, null));
        }

        // Wait for all my calls to finish
        WaitHandle.WaitAll(results.Select(r => r.AsyncWaitHandle).ToArray());

        // Process the Results

    The question is: can I do this with one instance of the delegate, or do I need an instance of the delegate for each individual call? Given that EndInvoke() takes an IAsyncResult as a parameter, I would assume that the former is correct, but I can't see anything in the documentation to indicate either way.
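
    For what it's worth (a sketch of my addition, not from the post): a single delegate instance can have several calls in flight, because each BeginInvoke returns its own IAsyncResult, and that object is what ties an EndInvoke back to its call:

        foreach (IAsyncResult r in results)
        {
            // EndInvoke(r) completes exactly the call that produced r
            d.EndInvoke(r);
        }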

    Read the article

  • Can I un-assign (clear) all fields of an instance?

    - by Roman
    Is there a simple way to clear all fields of an instance? I mean, I would like to remove all values assigned to the fields of an instance.

    ADDED: From the main thread I start a window and another thread which controls the state of the window (the latter thread, for example, displays certain panels for a certain period of time). I have a class which contains the state of the window (which stage the user is on, which buttons he has already clicked). In the end, the user may want to start the whole process from the beginning (it is a game). So, if everything is executed from the beginning, I would like all parameters to be clean (fresh, unassigned).

    ADDED: The main thread creates the new object which is executed in a new thread (and the old thread is finished). So, I cannot create a new object from the old thread. I just have a loop in the second thread.
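
    If this is Java, which the threading description suggests (my assumption), reflection can do a blunt version of this; a sketch:

        import java.lang.reflect.Field;
        import java.lang.reflect.Modifier;

        static void clearFields(Object o) throws IllegalAccessException {
            for (Field f : o.getClass().getDeclaredFields()) {
                int mods = f.getModifiers();
                if (Modifier.isStatic(mods) || Modifier.isFinal(mods)) continue;
                f.setAccessible(true);
                Class<?> t = f.getType();
                if (!t.isPrimitive())        f.set(o, null);        // references
                else if (t == boolean.class) f.setBoolean(o, false);
                else if (t == char.class)    f.setChar(o, '\u0000');
                else                         f.set(o, (byte) 0);    // widens to the numeric type
            }
        }

    That said, having the second thread's loop swap in a freshly constructed state object is usually the cleaner reset.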

    Read the article

  • The Execute SQL Task

    In this article we are going to take you through the Execute SQL Task in SQL Server Integration Services for SQL Server 2005 (although it applies just as well to SQL Server 2008). We will be covering all the essentials that you will need to know to effectively use this task and make it as flexible as possible. The things we will be looking at are as follows:

    - A tour of the task.
    - The properties of the task.

    After looking at these introductory topics we will then get into some examples. The examples will show different types of usage for the task:

    - Returning a single value from a SQL query with two input parameters.
    - Returning a rowset from a SQL query.
    - Executing a stored procedure and retrieving a rowset, a return value and an output parameter value, while passing in an input parameter.
    - Passing in the SQL statement from a variable.
    - Passing in the SQL statement from a file.

    Tour Of The Task

    Before we can start to use the Execute SQL Task in our packages we are going to need to locate it in the toolbox. Let's do that now. Whilst in the Control Flow section of the package, expand your toolbox and locate the Execute SQL Task. Below is how we found ours. Now drag the task onto the designer. As you can see from the following image, a validation error appears telling us that no connection manager has been assigned to the task. This can be easily remedied by creating a connection manager. Only certain types of connection manager are compatible with this task, and these are detailed a few graphics later. Double click on the task itself to take a look at the custom user interface provided for this task. The task will open on the General tab as shown below. Take a bit of time to have a look around here, as throughout this article we will be revisiting this page many times. Whilst on the General tab, drop down the combobox next to the ConnectionType property. In here you will see the types of connection manager which this task will accept. As with SQL Server 2000 DTS, SSIS allows you to output values from this task in a number of formats. Have a look at the combobox next to the ResultSet property. The major difference here is the ability to output into XML. If you drop down the combobox next to the SQLSourceType property you will see the ways in which you can pass a SQL statement into the task itself. We will have examples of each of these later on, but certainly when we saw these for the first time we were very excited. If you click in the empty box next to the SQLStatement property you will see ellipses appear. Click on them and you will see the very basic query editor that becomes available to you. Alternatively, after you have specified a connection manager for the task, you can click on the Build Query button to bring up a completely different query editor. This is slightly inconsistent. Once you've finished looking around the General tab, move on to the next tab, which is the Parameter Mapping tab. We shall, again, be visiting this tab throughout the article, but to give you an initial heads up, this is where you define the input, output and return values for your task. Note this is not where you specify the result set. If, however, you now move on to the ResultSet tab, this is where you define which variable will receive the output from your SQL statement, in whatever form that is.
    Property Expressions are one of the most amazing things to happen in SSIS, and they will not be covered here as they deserve a whole article to themselves. Watch out for them, as their usefulness will astound you. For a more detailed discussion of what the parameter markers in the SQL statements on the General tab should be and how to map them to variables on the Parameter Mapping tab, see Working with Parameters and Return Codes in the Execute SQL Task.

    Task Properties

    There are two places where you can specify the properties for your task. One is in the task UI itself, and the other is in the property pane which will appear if you right click on your task and select Properties from the context menu. We will be doing plenty of property setting in the UI later, so let's take a moment to have a look at the property pane. Below is a graphic showing our properties pane. Now we shall take you through all the properties and tell you exactly what they mean. A lot of these properties you will see across all tasks, as well as the package, because of everything's base structure, the Container.

    BypassPrepare: Should the statement be prepared before sending to the connection manager destination (True/False)?
    Connection: This is simply the name of the connection manager that the task will use. We can get this from the connection manager tray at the bottom of the package.
    DelayValidation: A really interesting property; it tells the task not to validate until it actually executes. A usage for this may be that you are operating on a table yet to be created, but at runtime you know the table will be there.
    Description: Very simply, the description of your task.
    Disable: Should the task be enabled or not? You can also set this through a context menu by right clicking on the task itself.
    DisableEventHandlers: As a result of events that happen in the task, should the event handlers for the container fire?
    ExecValueVariable: The variable assigned here will get or set the execution value of the task.
    Expressions: Expressions, as we mentioned earlier, are a really powerful tool in SSIS, and the graphic below shows a small peek of what you can do. We select a property on the left and assign an expression to the value of that property on the right, causing the value to be dynamically changed at runtime. One of the most obvious uses of this is that the property value can be built dynamically from within the package, allowing you a great deal of flexibility.
    FailPackageOnFailure: If this task fails, does the package?
    FailParentOnFailure: If this task fails, does the parent container? A task can be hosted inside another container, i.e. the For Each Loop Container, and this would then be the parent.
    ForcedExecutionValue: This property allows you to hard code an execution value for the task.
    ForcedExecutionValueType: What is the datatype of the ForcedExecutionValue?
    ForceExecutionResult: Force the task to return a certain execution result. This could then be used by the workflow constraints. Possible values are None, Success, Failure and Completion.
    ForceExecutionValue: Should we force the execution result?
    IsolationLevel: This is the transaction isolation level of the task.
    IsStoredProcedure: Certain optimisations are made by the task if it knows that the query is a stored procedure invocation. The docs say this will always be false unless the connection is an ADO connection.
    LocaleID: Gets or sets the LocaleID of the container.
    LoggingMode: Should we log for this container, and what settings should we use?
    The value choices are UseParentSetting, Enabled and Disabled.
    MaximumErrorCount: How many times can the task fail before we call it a day?
    Name: Very simply, the name of the task.
    ResultSetType: How do you want the results of your query returned? The choices are ResultSetType_None, ResultSetType_SingleRow, ResultSetType_Rowset and ResultSetType_XML.
    SqlStatementSource: Your query/SQL statement.
    SqlStatementSourceType: The method of specifying the query. Your choices here are DirectInput, FileConnection and Variables.
    TimeOut: How long should the task wait to receive results?
    TransactionOption: How should the task handle being asked to join a transaction?

    Usage Examples

    As we move through the examples we will only cover in them what we think you must know and what we think you should see. This means that some of the more elementary steps, like setting up variables, will be covered in the early examples but skipped and simply referred to in later ones. All these examples use the AdventureWorks database that comes with SQL Server 2005.

    Returning a Single Value, Passing in Two Input Parameters

    So the first thing we are going to do is add some variables to our package. The graphic below shows those variables having been defined. Here the CountOfEmployees variable will be used as the output from the query, and EndDate and StartDate will be used as input parameters. As you can see, all these variables have been scoped to the package. Scoping allows us to have domains for variables. Each container has a scope, and remember a package is a container as well. Variable values of the parent container can be seen in child containers, but cannot be passed back up to the parent from a child. Our following graphic has had a number of changes made. The first of those changes is that we have created and assigned an OLEDB connection manager to this task, ExecuteSQL Task Connection. The next thing is we have made sure that the SQLSourceType property is set to Direct Input, as we will be writing in our statement ourselves. We have also specified that only a single row will be returned from this query. The statement we typed in was:

        SELECT COUNT(*) AS CountOfEmployees
        FROM HumanResources.Employee
        WHERE (HireDate BETWEEN ? AND ?)

    Moving on now to the Parameter Mapping tab, this is where we are going to tell the task about our input parameters. We add them to the window, specifying their direction and datatype. A quick word here about the structure of the variable name: as you can see, SSIS has preceded the variable with the word User. This is a default namespace for variables, but you can create your own. When defining your variables, if you look at the Variables window title bar you will see some icons. If you hover over the last one on the right you will see it says "Choose Variable Columns". If you click the button you will see a list of checkbox options, and one of them is Namespace; after checking this you will see where you can define your own namespace. The next tab, Result Set, is where we need to get back the value(s) returned from our statement and assign them to a variable, which in our case is CountOfEmployees, so we can use it later perhaps. Because we are only returning a single value, then if you remember from earlier we are allowed to assign a name to the result set, but it must be the name of the column (or alias) from the query. A really cool feature of Business Intelligence Studio being hosted by Visual Studio is that we get breakpoint support for free.
    In our package we set a breakpoint so we can pause the package and look in a watch window at the variable values as they appear to our task, and at what the variable value of our result set is after the task has done the assignment. Here's that window now. As you can see, the count of employees that matched the date range was 2.

    Returning a Rowset

    In this example we are going to return a result set back to a variable after the task has executed, not just a single-row, single value. There are no input parameters required, so the Variables window is nice and straightforward: one variable of type Object. Here is the statement that will form the source for our result set:

        SELECT p.ProductNumber, p.Name, pc.Name AS ProductCategoryName
        FROM Production.ProductCategory pc
        JOIN Production.ProductSubCategory psc
            ON pc.ProductCategoryID = psc.ProductCategoryID
        JOIN Production.Product p
            ON psc.ProductSubCategoryID = p.ProductSubCategoryID

    We need to make sure that we have selected Full result set as the ResultSet, as shown below on the task's General tab. Because there are no input parameters we can skip the Parameter Mapping tab and move straight to the Result Set tab. Here we need to add our variable defined earlier and map it to the result name of 0 (remember we covered this earlier). Once we run the task we can again set a breakpoint and have a look at the values coming back from the task. In the following graphic you can see the result set returned to us as a COM object. We can do some pretty interesting things with this COM object, and in later articles that is exactly what we shall be doing.

    Return Values, Input/Output Parameters and Returning a Rowset from a Stored Procedure

    This example is pretty much going to give us a taste of everything. We have already covered in the previous example how to specify the ResultSet to be a full result set, so we will not cover it again here. For this example we are going to need four variables: one for the return value, one for the input parameter, one for the output parameter and one for the result set. Here is the statement we want to execute. Note how much cleaner it is than if you wanted to do it using the current version of DTS. In the Parameter Mapping tab we are going to add our variables and specify their direction and datatypes. In the Result Set tab we can now map our final variable to the rowset returned from the stored procedure. It really is as simple as that, and we were amazed at how much easier it is than in DTS 2000.

    Passing in the SQL Statement from a Variable

    SSIS, as we have mentioned, is hugely more flexible than its predecessor, and one of the things you will notice when moving around the tasks and the adapters is that a lot of them accept a variable as an input for something they need. The Execute SQL Task is no different. It will allow us to pass in a string variable as the SQL statement. This variable value could have been set earlier on from inside the package, or it could have been populated from outside using a configuration. The ResultSet property is set to single row, and we'll show you why in a second when we look at the variables. Note also the SQLSourceType property. Here's the General tab again. Looking at the variables we have in this package, you can see we have only two: one for the return value from the statement and one which is obviously for the statement itself. Again we need to map the result name to our variable, and this can be a named result name (the column name or alias returned by the query) and not 0.
    The expected result in our variable is the number of rows in the Person.Contact table, and if we look in the watch window we see that it is.

    Passing in the SQL Statement from a File

    The final example we are going to show is a really interesting one. We are going to pass the SQL statement to the task by using a file connection manager; the file itself contains the statement to run. The first thing we are going to need to do is create our file connection manager to point to our file. Click in the connections tray at the bottom of the designer, right click and choose "New File Connection". As you can see in the graphic below, we have chosen to use an existing file and have passed in the name as well. Have a look around at the other "Usage Type" values available whilst you are here. Having set that up, we can now see in the connection manager tray our file connection manager sitting alongside the OLE DB connection we have been using for the rest of these examples. Now we can go back to the familiar General tab to set up how the task will accept our file connection as the source. All the other properties in this task are set up exactly as we have been doing for the other examples, depending on the options chosen, so we will not cover them again here.

    We hope you will agree that the Execute SQL Task has changed considerably in this release from its DTS predecessor. It has a lot of options available, but once you have configured it a few times you get to learn what needs to go where. We hope you have found this article useful.
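
    As an aside (my addition, not part of the original walkthrough), the parameterized statement from the first example can be smoke-tested outside SSIS with sp_executesql, swapping the ? markers for named parameters:

        -- AdventureWorks sample database; the dates are arbitrary test values
        EXEC sp_executesql
            N'SELECT COUNT(*) AS CountOfEmployees
              FROM HumanResources.Employee
              WHERE HireDate BETWEEN @StartDate AND @EndDate',
            N'@StartDate datetime, @EndDate datetime',
            @StartDate = '20020101', @EndDate = '20021231';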

    Read the article

  • EC2: is an instance's public DNS stable? Can I rely on it not changing?

    - by Aseem Kishore
    I'm new to Amazon EC2. I've launched my first instance, and am using it as a web server. I see that it has a public DNS (a public URL), e.g.: ec2-123-45-6-789.compute-1.amazonaws.com I can successfully go to this server in my browser, hit it via cURL, etc. I want to use this web server for a back-end service in an app I'm building, so I placed this URL in my app's config, and it works great. But when I manually stop and re-started my instance, I see that the public DNS changes! I've read that this happens when you explicitly stop and re-start, but doesn't happen if you just "reboot". I don't plan on explicitly stopping and re-starting this server ever, but my question is: will this public DNS ever change on its own for any reason? E.g. if the machine abnormally crashes, or whatever. In other words, is it safe to ship an app that's wired to this URL? Thanks!

    Read the article

  • Production deployment to EC2 with minimal downtime

    - by jensendarren
    I have a simple web application deployed on a large instance with EC2. I now want to deploy the latest code to this server, but I want to do this in a way which minimizes downtime and is as smooth as possible for the end user. Here is my plan:

    1. Fire up another large instance.
    2. Install all the software layers on that instance.
    3. Restore and attach an EBS drive to the instance.
    4. Deploy our latest production-ready code on the new instance.
    5. Run all tests (including manual testing of the application).
    6. (If tests pass) Put a "Site Under Maintenance" notice on the live site.
    7. Back up the EBS volume on the live site.
    8. Detach the EBS volume from the new server and replace it with the latest backup.
    9. Use ec2-associate-address to move the IP address to the new instance (as sketched below).
    10. Sit back and wait for traffic to start flowing through the new instance.
    11. Terminate the old instance.

    Does this seem like a good strategy? Are there any tutorials or books that might cover this topic? I have already read Cloud Application Architectures by George Reese, which is an excellent book, but it does not cover deployment. Additionally, I know that there are tools that can help with this, like RightScale or enStratus, which I will use when I start using more than one instance.
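
    The volume and address moves in steps 8 and 9 map onto the classic EC2 command-line tools roughly like this (a sketch with hypothetical IDs, my addition, not from the post):

        # detach the data volume from the old instance, attach it to the new one
        ec2-detach-volume vol-abcd1234
        ec2-attach-volume vol-abcd1234 -i i-new56789 -d /dev/sdf

        # move the Elastic IP so clients follow automatically
        ec2-associate-address 203.0.113.10 -i i-new56789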

    Read the article

  • Managing persistent data on an Amazon EC2 web server

    - by Derek
    I've just started trying out Amazon's EC2 service for running an ASP.NET web app which uses a SQL Server 2005 Express database. I have some questions about how to configure and operate it best for reliability, and I'm hoping to tap into some collective wisdom here, as this is my first foray into EC2. Here's how I have it configured currently:

    - OS: Windows 2003
    - SQL Server Express 2005
    - Web content stored on an EBS volume (E drive)
    - Database data stored on an EBS volume (E drive)
    - Database backups to the C drive, then copied off to S3
    - Elastic IP address attached to the production instance

    Now, when I make a change to the OS configuration, I make a new AMI using the bundle feature. Unfortunately, I found that this results in significant downtime while the bundle is created and the new instance is started. It seems that when I'm ready to make a new AMI, I should:

    1. Start up a new temporary instance.
    2. Detach the EBS volume from the production instance.
    3. Detach the IP address from the production instance.
    4. Attach the IP address to the temporary instance.
    5. Attach the EBS volume to the temporary instance.
    6. Create an AMI from the production instance.
    7. After the production instance restarts, reverse the attach/detach steps to put it back in production.

    Is this the right order of events to prevent any chance of corrupting the EBS volume? Will the EBS volume become corrupt if I detach it while a database write is taking place? Should I snapshot the EBS volume of the production instance and attach it to the temporary instance instead? Or could taking a snapshot of the EBS volume while it's in use cause corruption? Any suggestions to improve the reliability and operations?

    Read the article

  • Scaling a LAMP website hosted on EC2

    - by Gublooo
    Hello, I'm very new to all this. I've recently managed to launch my website on EC2. As a next step, I want to learn how to scale the website. I have a general idea, but wanted some input from the experts about how to go about it. My website is based on LAMP but also has a Red5 server, which allows users to record messages and is also used for playing them back. Currently this is the architecture I'm planning to set up for initial scaling: deploy four small EC2 instances for the following purposes.

    - Instance 1: runs the MySQL database.
    - Instance 2: runs the Red5 server.
    - Instances 3 and 4: used to deploy the website, with Apache running on them. They will communicate with the MySQL server on Instance 1 and the Red5 server on Instance 2 using the internal IP address. As and when required, I will launch another instance of the same kind.
    - EBS: I will have an EBS volume of, say, 50 GB where all the MySQL data will be stored. Red5 will also use this EBS volume to store the video messages.
    - Load balancer: use the load balancer provided by Amazon to balance Instances 3 and 4.

    This is what I have in mind. I could be way off, so please bear with me. Also, I have not taken into account the case of scaling the MySQL server, as I currently have no idea how that will be done and whether or not it is necessary initially. I am aware that Amazon provides auto scaling and MySQL scaling as well, but I don't want to get into that right now. Your feedback is appreciated. Thanks

    Read the article

  • Can I programmatically get hold of the Autos/local variables that are shown when debugging?

    - by Stefan
    I'm trying to build an error logger that logs the running values active in the function that caused the error (just for fun, so it's not a critical problem). When going into break mode and looking at the Locals tab and Autos tab you can see all active variables (name, type and value); it would be useful to get hold of that for logging purposes when an error occurs, and on some other occasions. For my example, I just want to find all local variables that are of type string and integer and store their names and values. Is this possible with reflection? Any tips or pointers that get me closer to my goal would be very appreciated. I have toyed with using expressions on a specific object (a structure) to create an automapper against a dataset, but I have not done anything like what I ask for above, so please make me happy and say it's possible. Thanks.
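
    For context (my note, not from the post): reflection alone cannot read another stack frame's local values at runtime; without a debugger API you can, at best, walk the call chain and list the local variables' types. A small sketch of what is reachable:

        // System.Diagnostics/Reflection expose structure, not runtime values
        var trace = new System.Diagnostics.StackTrace();
        foreach (var frame in trace.GetFrames())
        {
            var method = frame.GetMethod();
            Console.WriteLine(method.Name);
            var body = method.GetMethodBody();  // null for some runtime methods
            if (body == null) continue;
            foreach (var local in body.LocalVariables)
                Console.WriteLine("  local[{0}] : {1}", local.LocalIndex, local.LocalType);
        }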

    Read the article

  • Scaling-out Your Services by Message Bus based WCF Transport Extension – Part 1 – Background

    - by Shaun
    Cloud computing gives us more flexibility with computing resources; we can provision and deploy an application or service with multiple instances over multiple machines. As the number of service instances increases, how to balance the incoming messages and workload becomes a new challenge. Currently there are two approaches we can use to pass the incoming messages to the service instances; I would like to call them dispatcher mode and pulling mode.

    Dispatcher Mode

    The dispatcher mode introduces a role which takes responsibility for finding the best service instance to process a request. The image below describes the shape of this mode. Four clients communicate with the service through the underlying transportation. For example, if we are using HTTP, the clients might be connecting to the same service URL. On the server side there's a dispatcher listening on this URL which tries to retrieve all messages. When a message comes in, the dispatcher will find a proper service instance to process it. There are three mechanisms to find the instance:

    - Round-robin: The dispatcher will always send the message to the next instance. For example, if the dispatcher sent the message to instance 2, then the next message will be sent to instance 3, regardless of whether instance 3 is busy or not at that moment.
    - Random: The dispatcher will pick a service instance randomly, and as with round-robin mode it does not care whether the instance is busy or not.
    - Sticky: The dispatcher will send all related messages to the same service instance. This approach is typically used if the service methods are stateful or session-ful.

    But as you can see, none of these approaches is really load balanced. The clients will send messages at any time, and each message might take a different processing duration on the server side. This means that in some cases some of the service instances are very busy while others are almost idle. For example, if we were using round-robin mode, it could happen that most of the simple task messages were passed to instance 1 while the complex ones were sent to instance 3, even though instance 1 should be idle. This brings some problems to our architecture:

    - First, the response to the clients might take longer than it should. As shown in the figure above, messages 6 and 9 could be processed by instance 1 or instance 2, but in reality they were dispatched to the busy instance 3 because of the dispatcher and round-robin mode.
    - Secondly, if many requests come from the clients in a very short period, service instances might be filled with tons of pending tasks and some instances might crash.
    - Third, if we are using a cloud platform to host our service instances, for example Windows Azure, the computing resource is billed by service deployment period instead of actual CPU usage. This means that if any service instance is idle, it is wasting our money!
    - Last, the dispatcher is the bottleneck of the system, since all incoming messages must be routed by it. If we are using HTTP or TCP as the transport, the dispatcher would be a network load balancer. If we want more capacity, we have to scale up, or buy a hardware load balancer, which is very expensive, as well as scaling out the service instances.

    Pulling Mode

    Pulling mode doesn't need a dispatcher to route the messages. All service instances listen on the same transport and, whenever they are idle, try to retrieve the next proper message to process.
    Since there is no dispatcher in pulling mode, it requires some features of the transportation:

    - The transportation must support multiple clients connecting and multiple servers listening. HTTP and TCP don't allow multiple servers to listen on the same address and port, so they cannot be used in pulling mode directly.
    - All messages in the transportation must be FIFO, which means the oldest message must be received before newer ones.
    - Message selection would be a plus. This means both service and client can specify selection criteria and receive only certain kinds of messages. This feature is not mandatory, but would be very useful when implementing the request-reply and duplex WCF channel modes; otherwise we must keep an in-memory dictionary to store the reply messages. I will explain more about this in the following articles.

    A message bus, or message queue, is the best candidate as the transportation when using the pulling mode. First, it allows multiple applications to listen on the same queue, and it's FIFO. Some message buses also support message selection, such as TIBCO EMS and RabbitMQ. Others provide an in-memory dictionary which can store the reply messages, for example Redis.

    The principle of pulling mode is to let the service instances be self-managed. This means each instance will try to retrieve the next pending incoming message once it has finished the current task. This gives us more benefits and solves the problems we met in the dispatcher mode:

    - Each incoming message will be received by the best instance to process it, which means the load is very balanced. It will not happen that some instances are busy while others are idle, since the idle ones will retrieve more tasks to keep themselves busy.
    - Since all instances try their best to stay busy, we can use fewer instances than in dispatcher mode, which is more cost-effective.
    - Since there's no dispatcher in the system, there is no bottleneck. When we introduce more service instances, in dispatcher mode we would have to change something to let the dispatcher know about the new instances; in pulling mode, since all service instances are self-managed, no extra change is needed at all.
    - If there are many incoming messages, since the message bus can queue them in the transportation, the service instances will not crash.

    All the above are the benefits of using the pulling mode, but it introduces some problems as well:

    - Process tracking and debugging become more difficult. Since the service instances are self-managed, we cannot know which instance will process a message, so we need more information to support debugging and tracking.
    - Real-time response may not be supported. All service instances process the next message only after the current one is done; if we have real-time requests, this may not be a good solution.

    Comparing the pros and cons above, the pulling mode is the better solution for a distributed system architecture, because what we need most is scalability, cost-effectiveness and self-management.

    WCF and WCF Transport Extensibility

    Windows Communication Foundation (WCF) is a framework for building service-oriented applications. In the .NET world WCF is the standard way to implement a service. In this series I'm going to demonstrate how to implement the pulling mode on top of a message bus by extending WCF. I don't want to go deep into every related field of WCF, but will highlight its transport extensibility.
    When we implement an RPC foundation there are many aspects we need to deal with, for example message encoding, encryption, authentication, and message sending and receiving. In WCF, each aspect is represented by a channel. A message is passed through all necessary channels and finally sent to the underlying transportation; on the other side, the message is received from the transport and passed through the same channels until it reaches the business logic. This model is called the "channel stack" in WCF, and the last channel in the channel stack must always be a transport channel, which takes responsibility for sending and receiving the messages. As we are going to implement WCF over a message bus with the pulling-mode scaling-out solution, we need to create our own transport channel so that the client and service can exchange messages over our bus.

    Before we go deep into the transport channel, let's have a look at the message exchange patterns that WCF defines. A message exchange pattern (MEP) defines how client and service exchange messages over the transportation. WCF defines three basic MEPs: datagram, request-reply and duplex.

    - Datagram: Also known as one-way, or fire-and-forget mode. The message is sent from the client to the service, and no reply from the service is needed. The client doesn't care about the message result at all.
    - Request-Reply: A very commonly used pattern. The client sends the request message to the service and waits until the reply message comes back from the service.
    - Duplex: The client sends a message to the service; while the service is processing the message it can call back to the client. During the callback the service acts like a client and the client acts like a service.

    In WCF, each MEP has associated channels:

        MEP            Channels
        Datagram       IInputChannel, IOutputChannel
        Request-Reply  IRequestChannel, IReplyChannel
        Duplex         IDuplexChannel

    The channels are created by a ChannelListener on the server side and a ChannelFactory on the client side. The ChannelListener and ChannelFactory are created by the TransportBindingElement, which in turn is created by the Binding; the Binding can be defined as a new binding or from a custom binding. For more information about the transport channel model, please refer to the MSDN documentation. The figure below shows the transport channel objects when using the request-reply MEP. And this is the datagram MEP. And this is the duplex MEP.

    After investigating the WCF transport architecture, channel model and MEPs, we can finally identify what we should do to build our message-bus-based transport layer:

    - Binding: (Optional) Defines the channel elements in the channel stack and adds our transport binding element at the bottom of the stack. We can use the built-in CustomBinding as well.
    - TransportBindingElement: Defines which MEPs are supported by our transport and creates the related ChannelListener and ChannelFactory. This also defines the scheme of the endpoints using this transport.
    - ChannelListener: Creates the server-side channel based on the MEP. We can have one ChannelListener create channels for all supported MEPs, or one ChannelListener per MEP. In this series I will use the second approach.
    - ChannelFactory: Creates the client-side channel based on the MEP. We can have one ChannelFactory create channels for all supported MEPs, or one ChannelFactory per MEP. In this series I will use the second approach.
    - Channels: Based on the MEPs we want to support, we need to implement the channels accordingly. For example, if we want our transport to support request-reply mode we should implement IRequestChannel and IReplyChannel. In this series I will implement all three MEPs listed above, one by one.
    - Scaffold: In order to make our transport extension work we also need some scaffolding. For example, we need some classes to send and receive messages through our message bus, and some code to read and write the WCF message, etc. These are not strictly necessary, but will be very useful in our example.

    Message Bus

    There is only one thing remaining before we can begin to implement our scaling-out-ready WCF transport: the message bus. As I mentioned above, the message bus must have certain features to fulfill all the WCF MEPs. In my company we will be using TIBCO EMS, which is an enterprise message bus product, but as I said before, we can use any message bus product that satisfies our requirements. Here I would like to introduce an interface to separate the message bus from WCF. This allows us to implement the bus operations with whatever kind of bus we end up using. The interface looks like this:

        public interface IBus : IDisposable
        {
            string SendRequest(string message, bool fromClient, string from, string to = null);

            void SendReply(string message, bool fromClient, string replyTo);

            BusMessage Receive(bool fromClient, string replyTo);
        }

    There are only three methods on the bus interface. Let me explain them one by one. The SendRequest method takes responsibility for sending the request message onto the bus. The parameters are:

    - message: The WCF message content.
    - fromClient: Indicates whether this message came from the client.
    - from: The channel ID that this message was sent from. The channel ID is generated whenever any kind of channel is created, which will be explained in the following articles.
    - to: The channel ID that should receive this message. In the request-reply and duplex MEPs this is necessary, since the reply message must be received by the channel which sent the related request message.

    The SendReply method takes responsibility for sending the reply message. It's very similar to the previous one, but has no "from" parameter, because there is no need to reply to a reply message in any MEP.

    The Receive method takes responsibility for waiting for an incoming message, including request messages and specified reply messages. It returns a BusMessage object, which carries some channel information. The code of the BusMessage class is:

        public class BusMessage
        {
            public string MessageID { get; private set; }
            public string From { get; private set; }
            public string ReplyTo { get; private set; }
            public string Content { get; private set; }

            public BusMessage(string messageId, string fromChannelId, string replyToChannelId, string content)
            {
                MessageID = messageId;
                From = fromChannelId;
                ReplyTo = replyToChannelId;
                Content = content;
            }
        }

    Now let's implement a message bus based on the IBus interface. Since I don't want you to have to buy and install TIBCO EMS or any other message bus product, I will implement an in-process memory bus. This bus is only for test and sample purposes; it can only be used if the service and client are in the same process. Very straightforward.
        public class InProcMessageBus : IBus
        {
            private readonly ConcurrentDictionary<Guid, InProcMessageEntity> _queue;
            private readonly object _lock;

            public InProcMessageBus()
            {
                _queue = new ConcurrentDictionary<Guid, InProcMessageEntity>();
                _lock = new object();
            }

            public string SendRequest(string message, bool fromClient, string from, string to = null)
            {
                var entity = new InProcMessageEntity(message, fromClient, from, to);
                _queue.TryAdd(entity.ID, entity);
                return entity.ID.ToString();
            }

            public void SendReply(string message, bool fromClient, string replyTo)
            {
                var entity = new InProcMessageEntity(message, fromClient, null, replyTo);
                _queue.TryAdd(entity.ID, entity);
            }

            public BusMessage Receive(bool fromClient, string replyTo)
            {
                InProcMessageEntity e = null;
                while (true)
                {
                    lock (_lock)
                    {
                        var entity = _queue
                            .Where(kvp => kvp.Value.FromClient == fromClient &&
                                          (kvp.Value.To == replyTo || string.IsNullOrWhiteSpace(kvp.Value.To)))
                            .FirstOrDefault();
                        if (entity.Key != Guid.Empty && entity.Value != null)
                        {
                            _queue.TryRemove(entity.Key, out e);
                        }
                    }
                    if (e == null)
                    {
                        Thread.Sleep(100);
                    }
                    else
                    {
                        return new BusMessage(e.ID.ToString(), e.From, e.To, e.Content);
                    }
                }
            }

            public void Dispose()
            {
            }
        }

    The InProcMessageBus stores the messages in InProcMessageEntity objects, which can carry some extra information besides the WCF message itself:

        public class InProcMessageEntity
        {
            public Guid ID { get; set; }
            public string Content { get; set; }
            public bool FromClient { get; set; }
            public string From { get; set; }
            public string To { get; set; }

            public InProcMessageEntity()
                : this(string.Empty, false, string.Empty, string.Empty)
            {
            }

            public InProcMessageEntity(string content, bool fromClient, string from, string to)
            {
                ID = Guid.NewGuid();
                Content = content;
                FromClient = fromClient;
                From = from;
                To = to;
            }
        }

    Summary

    OK, now I have all the necessary pieces ready. The next step is implementing our WCF message bus transport extension. In this post I described two scaling-out approaches on the service side, especially relevant if we are using a cloud platform: dispatcher mode and pulling mode, and I compared their pros and cons. Then I introduced the WCF channel stack, channel model and transport extension points, and identified what we should do to create our own WCF transport extension, to let our WCF services use pulling mode on top of a message bus. Finally, I provided some classes that will be used in future posts, working against an in-process memory message bus, for demonstration purposes only. In the next post I will begin to implement the transport extension step by step.

    Hope this helps,
    Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
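
    A quick usage sketch of the in-process bus (my addition, matching the signatures above; the channel IDs are hypothetical):

        IBus bus = new InProcMessageBus();

        // Client sends a request tagged with its channel ID.
        bus.SendRequest("<request/>", fromClient: true, from: "client-1");

        // Service pulls the next client-originated message...
        BusMessage request = bus.Receive(fromClient: true, replyTo: "service-1");

        // ...and replies, addressed back to the requesting channel.
        bus.SendReply("<reply/>", fromClient: false, replyTo: request.From);

        // Client pulls the reply addressed to it.
        BusMessage reply = bus.Receive(fromClient: false, replyTo: "client-1");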

    Read the article

  • Assign table values to multiple variables using a single SELECT statement and CASE?

    - by Darth Continent
    I'm trying to assign values contained in a lookup table to multiple variables by using a single SELECT having multiple CASE statements. The table is a lookup table with two columns, like so:

        [GreekAlphabetastic]
        SystemID  Descriptor
        --------  ----------
        1         Alpha
        2         Beta
        3         Epsilon

    This is my syntax:

        SELECT @VariableTheFirst  = CASE WHEN myField = 'Alpha'   THEN tbl.SystemID END,
               @VariableTheSecond = CASE WHEN myField = 'Beta'    THEN tbl.SystemID END,
               @VariableTheThird  = CASE WHEN myField = 'Epsilon' THEN tbl.SystemID END
        FROM GreekAlphabetastic tbl

    However, when I check the variables after this statement executes, I expected each to be assigned the appropriate value, but instead only the last has a value assigned:

        SELECT @VariableTheFirst  AS First,
               @VariableTheSecond AS Second,
               @VariableTheThird  AS Third

    Results:

        First  Second  Third
        NULL   NULL    3

    What am I doing wrong?
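
    A likely explanation and fix (my addition, not part of the question): the SELECT re-assigns every variable on every row, and on each row the non-matching CASEs evaluate to NULL, so whichever row is processed last (here 'Epsilon') wipes the other two variables. Preserving the previous value in the ELSE branch avoids that (using the Descriptor column from the table definition):

        SELECT @VariableTheFirst  = CASE WHEN Descriptor = 'Alpha'   THEN SystemID ELSE @VariableTheFirst  END,
               @VariableTheSecond = CASE WHEN Descriptor = 'Beta'    THEN SystemID ELSE @VariableTheSecond END,
               @VariableTheThird  = CASE WHEN Descriptor = 'Epsilon' THEN SystemID ELSE @VariableTheThird  END
        FROM GreekAlphabetastic;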

    Read the article

  • How to better create stacked bar graphs with multiple variables from ggplot2?

    - by deoksu
    I often have to make stacked barplots to compare variables, and because I do all my stats in R, I prefer to do all my graphics in R with ggplot2. I would like to learn how to do two things:

    First, I would like to be able to add proper percentage tick marks for each variable rather than tick marks by count. Counts would be confusing, which is why I take out the axis labels completely.

    Second, there must be a simpler way to reorganize my data to make this happen. It seems like the sort of thing I should be able to do natively in ggplot2 with plyr, but the documentation for plyr is not very clear (and I have read both the ggplot2 book and the online plyr documentation).

    My best graph looks like this; the code to create it follows:

        library(epicalc)

        ### Recode the variables to factors ###
        recode(c(int_newcoun, int_newneigh, int_neweur, int_newusa, int_neweco,
                 int_newit, int_newen, int_newsp, int_newhr, int_newlit,
                 int_newent, int_newrel, int_newhth, int_bapo, int_wopo,
                 int_eupo, int_educ),
               c(1,2,3,4,5,6,7,8,9, NA),
               c('Very Interested','Somewhat Interested','Not Very Interested',
                 'Not At All interested',NA,NA,NA,NA,NA,NA))

        ### Combine recoded variables into a common vector ###
        Interest1 <- c(int_newcoun, int_newneigh, int_neweur, int_newusa, int_neweco,
                       int_newit, int_newen, int_newsp, int_newhr, int_newlit,
                       int_newent, int_newrel, int_newhth, int_bapo, int_wopo,
                       int_eupo, int_educ)

        ### Create a second vector to label the first vector by original variable ###
        a1 <- rep("News about Bangladesh", length(int_newcoun))
        a2 <- rep("Neighboring Countries", length(int_newneigh))
        [...]
        a17 <- rep("Education", length(int_educ))
        Interest2 <- c(a1, a2, a3, a4, a5, a6, a7, a8, a9, a10,
                       a11, a12, a13, a14, a15, a16, a17)

        ### Create a weighting vector of the proper length ###
        Interest.weight <- rep(weight, 17)

        ### Make and save a new data frame from the three vectors ###
        Interest.df <- cbind(Interest1, Interest2, Interest.weight)
        Interest.df <- as.data.frame(Interest.df)
        write.csv(Interest.df, 'C:\\Documents and Settings\\[name]\\Desktop\\Sweave\\InterestBangladesh.csv')

        ### Sort the factor levels to display properly ###
        Interest.df$Interest1 <- relevel(Interest$Interest1, ref='Not Very Interested')
        Interest.df$Interest1 <- relevel(Interest$Interest1, ref='Somewhat Interested')
        Interest.df$Interest1 <- relevel(Interest$Interest1, ref='Very Interested')
        Interest.df$Interest2 <- relevel(Interest$Interest2, ref='News about Bangladesh')
        Interest.df$Interest2 <- relevel(Interest$Interest2, ref='Education')
        [...]
        Interest.df$Interest2 <- relevel(Interest$Interest2, ref='European Politics')
        detach(Interest)
        attach(Interest)

        ### Finally create the graph in ggplot2 ###
        library(ggplot2)
        p <- ggplot(Interest, aes(Interest2, ..count..))
        p <- p + geom_bar(aes(weight=Interest.weight, fill=Interest1))
        p <- p + coord_flip()
        p <- p + scale_y_continuous("", breaks=NA)
        p <- p + scale_fill_manual(value = rev(brewer.pal(5, "Purples")))
        p
        update_labels(p, list(fill='', x='', y=''))

    I'd very much appreciate any tips, tricks or hints. Thanks.
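
    On the first point, a small sketch (my addition, with a hypothetical data frame shaped like the one above): geom_bar(position = "fill") stacks each bar to a full height of 1, so the axis can be labelled directly as percentages:

        library(ggplot2)
        library(scales)  # for percent_format()

        p <- ggplot(Interest.df, aes(x = Interest2, fill = Interest1)) +
          geom_bar(position = "fill") +               # each bar sums to 100%
          coord_flip() +
          scale_y_continuous(labels = percent_format())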

    Read the article

  • Creating a list of most popular posts of the past week - WordPress

    - by Gary Woods
    I have created a widget for my WordPress platform that displays the most popular posts of the week. However, there is an issue with it: it counts the most popular posts from Monday, not the past 7 days. For instance, this means that on Tuesday it will only include posts from Tuesday and Monday. Here is my widget code:

        <?php
        class PopularWidget extends WP_Widget {

            function PopularWidget(){
                $widget_ops = array('description' => 'Displays Popular Posts');
                $control_ops = array('width' => 400, 'height' => 300);
                parent::WP_Widget(false, $name = 'ET Popular Widget', $widget_ops, $control_ops);
            }

            /* Displays the widget in the front-end */
            function widget($args, $instance){
                extract($args);
                $title = apply_filters('widget_title', empty($instance['title']) ? 'Popular This Week' : $instance['title']);
                $postsNum = empty($instance['postsNum']) ? '' : $instance['postsNum'];
                $show_thisweek = isset($instance['thisweek']) ? (bool) $instance['thisweek'] : false;
                echo $before_widget;
                if ( $title ) echo $before_title . $title . $after_title;
                $additional_query = $show_thisweek ? '&year=' . date('Y') . '&w=' . date('W') : '';
                query_posts( 'post_type=post&posts_per_page=' . $postsNum . '&orderby=comment_count&order=DESC' . $additional_query );
                ?>
                <div class="widget-aligned">
                    <h3 class="box-title">Popular Articles</h3>
                    <div class="blog-entry">
                        <ol>
                        <?php if (have_posts()) : while (have_posts()) : the_post(); ?>
                            <li><h4 class="title"><a href="<?php the_permalink(); ?>"><?php the_title(); ?></a></h4></li>
                        <?php endwhile; endif; wp_reset_query(); ?>
                        </ol>
                    </div>
                </div> <!-- end widget-aligned -->
                <div style="clear:both;"></div>
                <?php
                echo $after_widget;
            }

            /* Saves the settings */
            function update($new_instance, $old_instance){
                $instance = $old_instance;
                $instance['title'] = stripslashes($new_instance['title']);
                $instance['postsNum'] = stripslashes($new_instance['postsNum']);
                $instance['thisweek'] = 0;
                if ( isset($new_instance['thisweek']) ) $instance['thisweek'] = 1;
                return $instance;
            }

            /* Creates the form for the widget in the back-end */
            function form($instance){
                // Defaults
                $instance = wp_parse_args( (array) $instance,
                    array('title' => 'Popular Posts', 'postsNum' => '', 'thisweek' => false) );
                $title = htmlspecialchars($instance['title']);
                $postsNum = htmlspecialchars($instance['postsNum']);

                # Title
                echo '<p><label for="' . $this->get_field_id('title') . '">' . 'Title:' . '</label><input class="widefat" id="' . $this->get_field_id('title') . '" name="' . $this->get_field_name('title') . '" type="text" value="' . $title . '" /></p>';

                # Number of posts
                echo '<p><label for="' . $this->get_field_id('postsNum') . '">' . 'Number of posts:' . '</label><input class="widefat" id="' . $this->get_field_id('postsNum') . '" name="' . $this->get_field_name('postsNum') . '" type="text" value="' . $postsNum . '" /></p>';
                ?>
                <input class="checkbox" type="checkbox" <?php checked($instance['thisweek'], 1) ?> id="<?php echo $this->get_field_id('thisweek'); ?>" name="<?php echo $this->get_field_name('thisweek'); ?>" />
                <label for="<?php echo $this->get_field_id('thisweek'); ?>"><?php esc_html_e('Popular this week','Aggregate'); ?></label>
                <?php
            }
        } // end PopularWidget class

        function PopularWidgetInit() {
            register_widget('PopularWidget');
        }
        add_action('widgets_init', 'PopularWidgetInit');
        ?>

    How can I change this script so that it will count the past 7 days rather than posts since last Monday?
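
    One likely direction (my sketch, not from the post; assumes WordPress 3.7+ for date_query support): replace the year/week filter with a relative date query covering a rolling 7-day window:

        // Inside widget(), replacing the $additional_query line and query_posts() call
        $query_args = array(
            'post_type'      => 'post',
            'posts_per_page' => $postsNum,
            'orderby'        => 'comment_count',
            'order'          => 'DESC',
        );
        if ( $show_thisweek ) {
            $query_args['date_query'] = array(
                array( 'after' => '1 week ago' ),  // rolling 7-day window
            );
        }
        query_posts( $query_args );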

    Read the article
