Search Results

Search found 28747 results on 1150 pages for 'switch case'.


  • Regex to detect a proper permalink?

    - by Fedor
    The following permalinks are rerouted to my page:

    page.php?permalink=events/foo
    page.php?permalink=events/foo/
    page.php?permalink=ru/events/foo
    page.php?permalink=ru/events/foo/

    The "events" segment is dynamic; it could also be "specials" or "packages". My dilemma is basically: I need to detect an empty link so I can serve a robots noindex meta tag in the case of:

    page.php?permalink=events
    page.php?permalink=events/
    page.php?permalink=ru/events/
    page.php?permalink=ru/events

    I can't use a simple pattern such as [a-zA-Z]+\/?(.+)/ since it won't work on the i18n permalinks. What regex could I use to detect this, using $_GET['permalink'] as the reference to the permalinks, while avoiding false positives? Update: "empty link" means there is no fragment after the "events/" part; the four URLs just above are all empty.
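    A minimal sketch of one way to do this in PHP, assuming the section names (events, specials, packages) and a two-letter language prefix are the only variable parts; the pattern and helper name are illustrative, not from the original question:

        <?php
        // Hypothetical check: a permalink is "empty" when nothing follows
        // the section name (events|specials|packages), with an optional
        // two-letter i18n prefix such as "ru/".
        function is_empty_permalink($permalink) {
            $pattern = '#^(?:[a-z]{2}/)?(?:events|specials|packages)/?$#i';
            return preg_match($pattern, $permalink) === 1;
        }

        if (is_empty_permalink($_GET['permalink'])) {
            echo '<meta name="robots" content="noindex">';
        }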

    Read the article

  • Entity Framework 4.3.1 Code based Migrations and Connector/Net 6.6

    - by GABMARTINEZ
    Code-based migrations is a new feature in the Connector/Net support for Entity Framework 4.3.1. In this tutorial we'll see how to use it to keep track of the changes made to our database, creating a new application with the code-first approach. If you don't have a clear idea of how code first works, we highly recommend reading up on it before going further with this tutorial.

    Creating our Model and Database with Code First, from VS 2010:

    1. Create a new console application.

    2. Add the latest official Entity Framework package using the Package Manager Console (Tools menu, then Library Package Manager -> Package Manager Console). In the Package Manager Console type: Install-Package EntityFramework This adds the latest version of the library. We also need to make some changes to the config file: a <configSections> section was added which contains the version of EntityFramework you have, and an <entityFramework> section was added where you can set up some initialization. The latter section is optional and by default is generated to use SQL Express. Since we don't need it for now (we'll see more about it below), let's leave this section empty.

    3. Create a new Model with a simple entity.

    4. Enable Migrations to generate our Configuration class. In the Package Manager Console type: Enable-Migrations This makes some changes to the application. It creates a new folder called Migrations, where all the migrations representing the changes made to our model will be stored. It also creates a Configuration class that we'll use to initialize our SQL generator and some other values, such as whether to enable automatic migrations. You can see that it already has the name of our DbContext. You can also create the Configuration class manually.

    5. Specify our model provider. We need to specify in the Configuration class that we'll be using MySQLClient, since this is not part of the generated code. Also, please make sure you have added the MySql.Data and MySql.Data.Entity references to your project.

        using MySql.Data.Entity;   // Add the MySql.Data.Entity namespace

        public Configuration()
        {
            this.AutomaticMigrationsEnabled = false;
            SetSqlGenerator("MySql.Data.MySqlClient", new MySql.Data.Entity.MySqlMigrationSqlGenerator());    // This adds the MySQL client as SQL generator
        }

    6. Add our data provider and set up our connection string:

        <connectionStrings>
          <add name="PersonalContext" connectionString="server=localhost;User Id=root;database=Personal;" providerName="MySql.Data.MySqlClient" />
        </connectionStrings>
        <system.data>
          <DbProviderFactories>
            <remove invariant="MySql.Data.MySqlClient" />
            <add name="MySQL Data Provider" invariant="MySql.Data.MySqlClient" description=".Net Framework Data Provider for MySQL" type="MySql.Data.MySqlClient.MySqlClientFactory, MySql.Data, Version=6.6.2.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d" />
          </DbProviderFactories>
        </system.data>

    * The recommended version of Connector/Net is 6.6.2 or later.

    At this point we can create our database and then start working with migrations. So let's do some data access so our database gets created. Run your application and you'll get your database Personal, as specified in the config file.

    Add our first migration

    Migrations are a great resource: we get a record of all the changes made, and the MySQL statements required to apply those changes to the database are generated for us.
    Let's add a new property to our Person class:

        public string Email { get; set; }

    If you try to run your application it will throw an exception saying: The model backing the 'PersonelContext' context has changed since the database was created. Consider using Code First Migrations to update the database (http://go.microsoft.com/fwlink/?LinkId=238269). So, as suggested, let's add our first migration for this change. In the Package Manager Console type: Add-Migration AddEmailColumn Now we have the corresponding class, which generates the necessary operations to update our database:

        namespace MigrationsFromScratch.Migrations
        {
            using System.Data.Entity.Migrations;

            public partial class AddEmailColumn : DbMigration
            {
                public override void Up()
                {
                    AddColumn("People", "Email", c => c.String(unicode: false));
                }

                public override void Down()
                {
                    DropColumn("People", "Email");
                }
            }
        }

    In the Package Manager Console type: Update-Database Now you can check your database to see that all changes were successfully applied. Let's add a second change and generate our second migration:

        public class Person
        {
            [Key]
            public int PersonId { get; set; }
            public string Name { get; set; }
            public string Address { get; set; }
            public string Email { get; set; }
            public List<Skill> Skills { get; set; }
        }

        public class Skill
        {
            [Key]
            public int SkillId { get; set; }
            public string Description { get; set; }
        }

        public class PersonelContext : DbContext
        {
            public DbSet<Person> Persons { get; set; }
            public DbSet<Skill> Skills { get; set; }
        }

    If you would like to customize any part of this code, you can do that at this step. You can see there is the Up method, which updates your database, and the Down method, which reverts the changes. If you customize any code, make sure to customize both methods. Now let's apply this change: Update-Database -verbose I added the verbose flag so you can see all the generated SQL statements being run.

    Downgrading changes

    So far we have always upgraded to the latest migration, but there may be times when you want to downgrade to a specific migration. Let's say we want to return to the state we were in before our last migration. We can use the -TargetMigration option to specify the migration we'd like to return to (the -verbose flag works here too). If you would like to go back to the initial state you can do: Update-Database -TargetMigration:$InitialDatabase or, equivalently: Update-Database -TargetMigration:0 By default, Migrations will not run a migration that would result in data loss; one case where you can get this message is a DropColumn operation. You can override this behavior by setting AutomaticMigrationDataLossAllowed to true in the Configuration class. You can also set your database initializer so that these migrations are applied automatically and you don't have to go all the way through creating a migration and updating the changes later. Let's see how.

    Database Initialization by Code

    We can specify an initialization strategy by using Database.SetInitializer (http://msdn.microsoft.com/en-us/library/gg679461(v=vs.103)). One strategy that I found very useful at the development stage (that is, not for production) is MigrateDatabaseToLatestVersion. This strategy performs all the necessary migrations each time there is a change in our model that needs to be replicated in the database; this also implies that we have to enable the AutomaticMigrationsEnabled flag in our Configuration class.
        public Configuration()
        {
            AutomaticMigrationsEnabled = true;
            AutomaticMigrationDataLossAllowed = true;
            SetSqlGenerator("MySql.Data.MySqlClient", new MySql.Data.Entity.MySqlMigrationSqlGenerator());    // This adds the MySQL client as SQL generator
        }

    In the new entityFramework section of your config file we can set this on a per-context basis. The syntax is as follows:

        <contexts>
          <context type="Custom DbContext name, Assembly name">
            <databaseInitializer type="System.Data.Entity.MigrateDatabaseToLatestVersion`2[[Custom DbContext name, Assembly name], [Configuration class name, Assembly name]], EntityFramework" />
          </context>
        </contexts>

    In our example this would be the same pattern filled in with our own type names (a sketch follows at the end of this post). The syntax is kind of odd but very convenient. This way, all changes will always be applied when we do any data access in our application. There are a lot of new things to explore in EF 4.3.1 and Migrations, so we'll continue writing more posts about it. Please let us know if you have any questions or comments, and please check our forums here, where we keep answering questions for the community in general. Hope you found this information useful. Happy MySQL/.Net Coding!
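    The concrete example appears to be missing from this copy of the post. A hedged sketch of what the context entry might look like here, assuming PersonelContext and the Configuration class live in a MigrationsFromScratch assembly (the names are inferred from the code above, not confirmed by the original):

        <contexts>
          <context type="MigrationsFromScratch.PersonelContext, MigrationsFromScratch">
            <databaseInitializer type="System.Data.Entity.MigrateDatabaseToLatestVersion`2[[MigrationsFromScratch.PersonelContext, MigrationsFromScratch], [MigrationsFromScratch.Migrations.Configuration, MigrationsFromScratch]], EntityFramework" />
          </context>
        </contexts>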

    Read the article

  • Storing date and time as epoch vs native datetime format in the database

    - by zakovyrya
    For most of my tasks I find it much easier to work with date and time in epoch format: it's trivial to calculate a timespan or determine whether one event happened before or after another; I don't have to deal with time-zone issues if the data comes from different geographical sources; and in scripting languages, what I usually get from the database when I request a datetime-typed column is a string that I need to parse before I can work with it. The list could go on, but for me, keeping my code portable is reason enough to ditch the database's native datetime format and store date and time as an integer. What do you guys think?
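    A minimal illustration of the convenience being claimed, in Python (the column values are made up):

        import time

        created_at = 1273500000          # epoch seconds, as stored in an INTEGER column
        updated_at = 1273503600

        elapsed = updated_at - created_at           # timespan is plain subtraction
        happened_before = created_at < updated_at   # ordering is plain comparison

        # convert to a human-readable form only when needed
        print(time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(created_at)))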

    Read the article

  • Detect the URI encoding automatically in Tomcat

    - by Roland Illig
    I have an instance of Apache Tomcat 6.x running, and I want it to interpret the character set of incoming URLs a little more intelligently than the default behavior. In particular, I want to achieve the following mapping:

    So%DFe => Soße
    So%C3%9Fe => Soße
    So%DF%C3%9F => (error)

    The behavior I want could be described as "try to decode the byte stream as UTF-8, and if that doesn't work, assume ISO-8859-1". Simply using the URIEncoding configuration doesn't work in this case. So how can I configure Tomcat to decode the request the way I want? I might have to write a Filter that takes the request (especially the query string) and re-encodes it into the parameters. Would that be the natural way?
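    A sketch of the decoding logic itself (not a full Tomcat Filter), assuming the raw bytes have already been percent-decoded; a strict UTF-8 decoder rejects invalid sequences, and ISO-8859-1 is the fallback. StandardCharsets is used for brevity and needs Java 7+:

        import java.nio.ByteBuffer;
        import java.nio.charset.CharacterCodingException;
        import java.nio.charset.CharsetDecoder;
        import java.nio.charset.CodingErrorAction;
        import java.nio.charset.StandardCharsets;

        public class UriDecodeFallback {
            // Try UTF-8 first; the strict decoder throws on malformed input,
            // in which case we reinterpret the same bytes as ISO-8859-1.
            static String decode(byte[] raw) {
                CharsetDecoder utf8 = StandardCharsets.UTF_8.newDecoder()
                        .onMalformedInput(CodingErrorAction.REPORT)
                        .onUnmappableCharacter(CodingErrorAction.REPORT);
                try {
                    return utf8.decode(ByteBuffer.wrap(raw)).toString();
                } catch (CharacterCodingException e) {
                    return new String(raw, StandardCharsets.ISO_8859_1);
                }
            }
        }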

    Read the article

  • ASP.NET MaskedEditExtender issue

    - by Eyla
    I have an ASP.NET TextBox with a MaskedEditExtender. I set the ClearTextOnInvalid property to true, so when invalid text is entered the text should be cleared. In my case the mask is (999)999-9999. When I enter, for example, only 4 digits and the control then loses focus, shouldn't the text be cleared, since I did not enter a complete number that fits the mask? What I want is for the text to be cleared on lost focus if no complete number has been entered. Here is my code:

        <asp:TextBox ID="txtHomePhone" runat="server" Style="top: 147px; left: 543px; position: absolute; height: 22px; width: 128px"></asp:TextBox>
        <cc1:MaskedEditExtender ID="txtHomePhone_MEE" runat="server" CultureAMPMPlaceholder="" CultureCurrencySymbolPlaceholder="" CultureDateFormat="" CultureDatePlaceholder="" CultureDecimalPlaceholder="" CultureThousandsPlaceholder="" CultureTimePlaceholder="" Enabled="True" Mask="(999)999-9999" TargetControlID="txtHomePhone" AutoComplete="False" ClearMaskOnLostFocus="true" ClearTextOnInvalid="True">
        </cc1:MaskedEditExtender>

    Read the article

  • Remote Control software for Mac

    - by MarqueIV
    One of the things I like about Microsoft's RDC client is that the resolution of the experience is set by the client and not by, say, a physical monitor connected to the host, as is the case with VNC, the latter being the protocol used by the Mac. This means that even though I'm connecting to a notebook with a 1280x800 physical resolution, via RDC I can run it at 2560x1600 on my 30" monitor. However, that only seems to work for RDC. Does anyone know of something I can run on the Mac that will allow me to control it remotely at a different resolution than what is physically set? TIA, Mark

    Read the article

  • Flex TextArea leading invalid

    - by Andreas
    Hi, when I set leading to 0 with Verdana (and other fonts too) for a Flex TextArea, I get strange results:

    font size - space between baselines
    8  - 10 (125%)
    10 - 12 (120%)
    12 - 14 (117%)
    14 - 17 (121%)
    16 - 18 (113%)
    18 - 22 (122%)
    20 - 25 (125%)
    25 - 31 (124%)

    Shouldn't all of these be the same (100%)? Or at least follow a pattern? Because I can't see any. :( When I set leading to 0, the space between my baselines should be the same as the font size/height. This is not the case here. How do I get the space between the baselines to be equal to the height of the font?

    Read the article

  • Public-facing SharePoint 2007 portal - authentication question

    - by jdcorr
    I am involved in developing a portal with a public-facing side. For this I created a web application with Windows authentication for the intranet zone, and after that I created an extension for an internet zone with FBA. In the internet extension we have the following requirements: users must be able to access the SharePoint back office using FBA, and there must be an authentication mechanism for portal visitors, where they can authenticate and access a page to subscribe to the newsletter and define some site appearance (these users must not be able to access the SharePoint back office). My idea is to use the ASP.NET membership provider to authenticate both kinds of users and create different roles for them. Can anyone suggest another approach? Is there any way to ensure that visitors (the second case) do not enter the back office of the portal? Thanks

    Read the article

  • How to properly name record creation(insertion) datetime field ?

    - by alpav
    If I create a table with a datetime field defaulting to getdate() that is intended to keep the date and time of record insertion, which name is better to use for that field? I like to use Created, and I've seen people use DateCreated or CreateDate. Other possible candidates I can think of are: CreatedDate, CreateTime, TimeCreated, CreateDateTime, DateTimeCreated, RecordCreated, Inserted, InsertedDate, ... From my point of view, anything with Date in the name looks bad, because it can be confused with the date part in case I have two fields, CreateDate and CreateTime. So I wonder if there are any specific recommendations/standards in this area based on real reasons, not just style, mood or consistency. Of course, if there are 100 existing tables and this is table 101, then I would use the same naming convention as those 100 tables for the sake of consistency; but this question is about the first table in the first database on the first server in the first application.

    Read the article

  • Robust fault tolerant MySQL replication

    - by Joshua
    Is there any way to get fault-tolerant MySQL replication? I am in an environment that has many networking issues. It appears that replication hits an error and just stops; I need it to keep working and recover from these faults. There is some wrapper software that checks the state of replication and restarts it in the case of a lost log position. Is there an alternative? Note: replication is done from an embedded computer running MySQL 4.1 to an external computer running MySQL 5.0.45.
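    In the absence of a built-in retry, the usual workaround is a small watchdog that polls the slave and restarts it. A minimal sketch in Python, assuming a local mysql client with credentials in ~/.my.cnf; real error-code handling (e.g. deciding which errors are safe to resume from) is deliberately omitted:

        import subprocess, time

        def mysql(statement):
            # shell out to the mysql CLI; credentials come from ~/.my.cnf
            return subprocess.check_output(["mysql", "-e", statement]).decode("utf-8", "replace")

        while True:
            status = mysql(r"SHOW SLAVE STATUS\G")
            if "Slave_IO_Running: No" in status or "Slave_SQL_Running: No" in status:
                mysql("START SLAVE")   # try to resume after a fault
            time.sleep(30)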

    Read the article

  • Amazon S3 Add METADATA to existing KEY

    - by Daveo
    In the S3 REST API I am adding metadata to an existing object by using the PUT (Copy) command, copying a key to the same location with 'x-amz-metadata-directive' = 'REPLACE'. What I want to do is change the download file name by setting: Content-Disposition: attachment; filename=foo.bar; This sets the metadata correctly, but when I download the file it still uses the key name instead of 'foo.bar'. I used a software tool, S3 Browser, to view the metadata, and it looks correct (apart from 'Content-Disposition' being all lower case, as that is what S3 asked me to sign). Then, using S3 Browser, I just pressed save without changing anything, and now it works??? What am I missing? How come setting the metadata 'Content-Disposition: attachment; filename=foo.bar;' from my web app does not work, but it does work from S3 Browser?
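    For comparison, the same copy-in-place pattern expressed with a modern SDK (boto3); this is a sketch, not the questioner's REST code, and the bucket/key names are made up:

        import boto3

        s3 = boto3.client("s3")

        # Copy the object onto itself, replacing its metadata so that
        # Content-Disposition is stored as a proper HTTP response header
        # (system metadata), not as a user-defined x-amz-meta-* entry.
        s3.copy_object(
            Bucket="my-bucket",
            Key="some/key",
            CopySource={"Bucket": "my-bucket", "Key": "some/key"},
            MetadataDirective="REPLACE",
            ContentDisposition='attachment; filename="foo.bar"',
        )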

    Read the article

  • How to stop the Bottle webserver when started from subprocess

    - by luc
    Hello all, I would like to embed the great Bottle web framework into a small application (the first target is Windows). This app starts the Bottle webserver via the subprocess module:

        import subprocess
        p = subprocess.Popen('python websrv.py')

    The Bottle app is quite simple:

        @route("/")
        def index():
            return template('index')

        run(reloader=True)

    It starts the default webserver in a Windows console. All seems OK, except that I must press Ctrl-C to close the Bottle webserver. I would like the master app to terminate the webserver when it shuts down. I can't find a way to do that (p.terminate() doesn't work in this case, unfortunately). Any idea? Thanks in advance
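    One plausible culprit: run(reloader=True) makes Bottle spawn a child process that runs the actual server, so terminating the launched process can leave that child alive. A sketch of a workaround, assuming the reloader is not needed when embedded (so websrv.py calls run() without reloader=True); the structure is illustrative:

        import atexit
        import subprocess
        import sys

        # Launch the web server without the shell, using the same interpreter.
        p = subprocess.Popen([sys.executable, "websrv.py"])

        def shutdown():
            if p.poll() is None:   # child still running
                p.terminate()      # TerminateProcess on Windows, SIGTERM elsewhere
                p.wait()

        # Ensure the web server dies when the master app exits.
        atexit.register(shutdown)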

    Read the article

  • Cython Speed Boost vs. Usability

    - by zubin71
    I just came across Cython while I was looking for ways to optimize Python code. I read various posts on Stack Overflow and the Python wiki, and read the article "General Rules for Optimization". Cython is what grasps my interest the most; instead of writing C code yourself, you can choose to use other data types in your Python code itself. Here is a silly test I tried:

        #!/usr/bin/python
        # test.pyx
        def test(value):
            for i in xrange(value):
                i**2
                if(i==1000000):
                    print i

        test(10000001)

        $ time python test.pyx
        real 0m16.774s
        user 0m16.745s
        sys 0m0.024s

        $ time cython test.pyx
        real 0m0.513s
        user 0m0.196s
        sys 0m0.052s

    Now, honestly, I'm dumbfounded. The code I have used here is pure Python code, and all I have changed is the interpreter. In this case, if Cython is this good, then why do people still use the traditional Python interpreter? Are there any reliability issues with Cython?

    Read the article

  • Iterate through all form fields within a specified DIV tag.

    - by user344255
    I need to be able to iterate through all the form fields within a specified DIV tag. Basically, any given DIV tag can have multiple form fields (which are easy enough to parse through), but it can also contain any number of tables or even additional DIV tags (adding additional levels of hierarchical layering). I've written a basic function that goes through each of the direct descendants of the parent node (in this case, the DIV tag) and clears out its value. This part works fine. The problem is getting it to parse children when those children have children (grandchildren) of their own. It winds up getting caught in an infinite loop. In this case, I need to be able to find all the form fields within DIV tag "panSomePanel", which include some direct children (txtTextField1), but also some grandchildren that are within nested TABLE objects and/or nested DIV tags (radRadioButton, DESC_txtTextArea). Here is a sample DIV and its contents:

        <DIV id="panSomePanel">
          <INPUT name="txtTextField1" type="text" id="txtTextField1" size="10"/><BR><BR>
          <TABLE id="tblRadioButtons" border="0">
            <TR>
              <TD><INPUT id="radRadioButton_0" type="radio" name="radRadioButton" value="1" /><LABEL for="radRadioButton_0">Value 1</LABEL></TD>
              <TD><INPUT id="radRadioButton_5" type="radio" name="radRadioButton" value="23" /><LABEL for="radRadioButton_5">Value 23</LABEL></TD>
            </TR>
            <TR>
              <TD><INPUT id="radRadioButton_1" type="radio" name="radRadioButton" value="2" /><LABEL for="radRadioButton_1">Value 2</LABEL></TD>
              <TD><INPUT id="radRadioButton_6" type="radio" name="radRadioButton" value="24" /><LABEL for="radRadioButton_6">Value 24</LABEL></TD>
            </TR>
            <TR>
              <TD><INPUT id="radRadioButton_2" type="radio" name="radRadioButton" value="3" /><LABEL for="radRadioButton_2">Value 3</LABEL></TD>
              <TD><INPUT id="radRadioButton_7" type="radio" name="radRadioButton" value="25" /><LABEL for="radRadioButton_7">Value 25</LABEL></TD>
            </TR>
            <TR>
              <TD><INPUT id="radRadioButton_3" type="radio" name="radRadioButton" value="21" /><LABEL for="radRadioButton_3">Value 21</LABEL></TD>
              <TD><INPUT id="radRadioButton_8" type="radio" name="radRadioButton" value="4" /><LABEL for="radRadioButton_8">Value 4</LABEL></TD>
            </TR>
            <TR>
              <TD><INPUT id="radRadioButton_4" type="radio" name="radRadioButton" value="22" /><LABEL for="radRadioButton_4">Value 22</LABEL></TD>
            </TR>
          </TABLE>
          <DIV id="panAnotherPanel"><BR>
            <TABLE cellpadding="0" cellspacing="0" border="0" style="display:inline;vertical-align:top;">
              <TR>
                <TD valign="top"><TEXTAREA name="DESC:txtTextArea" rows="3" cols="48" id="DESC_txtTextArea"></TEXTAREA>&nbsp;</TD>
                <TD valign="top"><SPAN id="DESC_lblCharCount" style="font-size:8pt;"></SPAN></TD>
              </TR>
            </TABLE>
          </DIV>
        </DIV>

    Here is the function I've written:

        function clearChildren(node) {
            var child;
            if (node.childNodes.length > 0) {
                child = node.firstChild;
            }
            while (child) {
                if (child.type == "text") {
                    alert(child.id);
                    child.value = "";
                } else if (child.type == "checkbox") {
                    child.checked = false;
                } else if (child.type == "radio") {
                    alert(child.id);
                    child.checked = false;
                } else if (child.type == "textarea") {
                    child.innerText = "";
                }
                //alert(child.childNodes.length);
                if (child.childNodes.length > 0) {
                    var grandchild = child.firstChild;
                    while (grandchild) {
                        clearChildren(grandchild);
                    }
                    grandchild = grandchild.nextSibling;
                }
                child = child.nextSibling;
            }
        }
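    The infinite loop appears to come from the inner while: grandchild = grandchild.nextSibling sits outside that loop, so grandchild never advances. A hedged sketch of a corrected version that simply recurses into every child (the function name is kept from the question; per-field behavior preserved):

        function clearChildren(node) {
            var child = node.firstChild;
            while (child) {
                if (child.type == "text" || child.type == "textarea") {
                    // value (rather than innerText) also clears TEXTAREA elements
                    child.value = "";
                } else if (child.type == "checkbox" || child.type == "radio") {
                    child.checked = false;
                }
                clearChildren(child);   // recurse; leaf nodes simply have no children
                child = child.nextSibling;
            }
        }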

    Read the article

  • Industry Reports on Source Control Tools

    - by Kent Boogaart
    Hi, I'm looking for independent industry reports that compare and contrast the various source control tools out there. In particular, I care about ClearCase vs. SourceSafe vs. SVN, but it's fine if the report includes other SCM systems. I need this for a client who wants a feel for exactly what they stand to gain by switching to SVN (yes, from ClearCase and VSS); in other words, something I can use to sell it to their business. I'm hoping some case studies have been done on developer productivity with these tools, with the resulting reports made freely available. Thanks, Kent

    Read the article

  • HTML 5 Canvas - Dynamically include multiple images in canvas

    - by Ron
    I need to include multiple images in a canvas. I want to load them dynamically via a for loop. I have tried a lot, but in the best case only the last image is displayed. I checked this thread but I still get only the last image. For explanation, here's my latest code (basically the one from the other post):

        for (var i = 0; i <= max; i++) {
            thisWidth = 250;
            thisHeight = 0;
            imgSrc = "photo_" + i + ".jpg";
            letterImg = new Image();
            letterImg.onload = function() {
                context.drawImage(letterImg, thisWidth * i, thisHeight);
            }
            letterImg.src = imgSrc;
        }

    Any ideas?
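    The classic cause of "only the last image shows" here is that letterImg and i are shared across iterations, so every onload callback sees their final values. A sketch of a fix that captures the per-iteration values in a closure (names follow the question; the 250-pixel spacing is kept from the original):

        for (var i = 0; i <= max; i++) {
            (function (index) {
                var img = new Image();
                img.onload = function () {
                    // img and index are private to this iteration
                    context.drawImage(img, 250 * index, 0);
                };
                img.src = "photo_" + index + ".jpg";
            })(i);
        }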

    Read the article

  • HTTP redirect fallback

    - by Ondrej Stastny
    Hi, is there a way to provide a fallback URL when an HTTP redirect times out? What I'm trying to achieve: when the original URL is hit, I respond with an HTTP redirect (300? 307?) giving the browser the new URL, plus a fallback URL in case the new URL times out. I was also considering doing this client side, but it just does not seem very effective (in terms of speed and client support). What I would do is probably have a tiny 1x1px image on each server, try loading it, and then check with JavaScript which server is up and redirect there. Any other ideas? Thanks
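    As far as I know, plain HTTP redirects carry no fallback semantics, so the client-side probe described above is the usual approach. A sketch of it, with made-up server URLs and an arbitrary 3-second timeout:

        var primary  = "https://s1.example.com";
        var fallback = "https://s2.example.com";

        var probe = new Image();
        var timer = setTimeout(function () { location.href = fallback; }, 3000);

        probe.onload = function () {      // primary answered: go there
            clearTimeout(timer);
            location.href = primary;
        };
        probe.onerror = function () {     // primary refused: fail over early
            clearTimeout(timer);
            location.href = fallback;
        };
        // cache-buster so the probe really hits the server
        probe.src = primary + "/ping.gif?" + new Date().getTime();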

    Read the article

  • How to capture shift-tab in vim

    - by Yogesh Arora
    I want to use Shift-Tab for auto-completion and for shifting code blocks visually. I have been referring to Make_Shift-Tab_work. That link talks about mapping ^[[Z to Shift-Tab, but I don't get ^[[Z when I press Shift-Tab; I just get a normal Tab in that case. It then talks about using xmodmap -pke | grep 'Tab' to map the Tab keys. According to that, the output should be keycode 23 = Tab or keycode 23 = Tab ISO_Left_Tab. However, I get keycode 22 = Tab KP_Tab. If I use xmodmap -e 'keycode 22 = Tab ISO_Left_Tab' and after that run xmodmap -pke | grep 'Tab', I still get keycode 22 = Tab KP_Tab. This means running xmodmap -e 'keycode 22 = Tab ISO_Left_Tab' has no effect. In the end, the link mentions using xev to see what X receives when I press Shift-Tab, but I don't have xev on my system. Is there any other way I can capture Shift-Tab in vim?
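    For reference, once the terminal actually delivers a distinct sequence for Shift-Tab (gvim gets it for free), the vim side is just mappings. A sketch for ~/.vimrc; the completion and indent bindings are illustrative choices, not the only option:

        " teach vim the xterm-style sequence for Shift-Tab (type the escape
        " literally with Ctrl-V Esc in the real file)
        " set t_kB=^[[Z

        " insert mode: Shift-Tab cycles completion backwards
        inoremap <S-Tab> <C-p>

        " visual mode: Tab / Shift-Tab shift the selected block and reselect it
        vnoremap <Tab>   >gv
        vnoremap <S-Tab> <gv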

    Read the article

  • Wisdom of merging 100s of Oracle instances into one instance

    - by hoytster
    Our application runs on the web, is mostly an inquiry tool, and does some transactions. We host the Oracle database. The app has always had a different instance of Oracle for each customer. A customer is a company which pays us to provide our service to the company's employees, typically 10,000-25,000 employees per customer. We do a major release every few years, and migrating to that new release is challenging: we might have a team at the customer site for a couple of weeks, explaining new functionality and setting up the driving data to suit that customer. We're considering going multi-client, putting all our customers into a single shared Oracle 11g instance on a big honkin' Windows Server 2008 server, in order to reduce costs. I'm wondering if that's advisable.

    There are some advantages to having separate instances for each customer. Tell me if these are bogus, please. In my rough guess of decreasing importance:

    - Our customers MyCorp and YourCo can be migrated separately when breaking changes are made to the schema. (With multi-client, we'd be migrating 300+ customers overnight!?!)
    - MyCorp's data can be easily backed up and (!!!) restored, without affecting other customers.
    - MyCorp's data is securely separated from their competitor YourCo's data, without depending on developers to get the code right and/or DBAs to get the configuration right.
    - Performance is better because the database is smaller (5,000 vs 2,000,000 rows in ~50 tables).
    - If MyCorp's offices are (mostly) in just one region, then MyCorp's instance can be geographically co-located there, so network lag doesn't hurt performance. We can provide better service to global clients for the same reason.
    - If MyCorp wants to take their database in-house, then we can easily export their instance to give MyCorp their data.
    - Load-balancing is easier because instances can be placed on different servers (this is with a web farm).
    - When a DEV or QA instance is needed, it's easier to clone the real instance and anonymize the data, because there's much less data.
    - Because they're small enough, developers can have their own instance running locally, so they can work on code while waiting at the airport and while in-flight, without fighting VPN hassles.

    Q1: What are other advantages of separate instances?

    We are contemplating changing the database schema and merging all of our customers into one Oracle instance, running on one hefty server. Here are the advantages of the multi-client instance approach, most important first (my WAG). Please snipe if these are bogus:

    - Less work for the DBAs, since they only need to maintain one instance instead of hundreds. Less DBA work translates to cheaper, our main motive for this change.
    - With just one instance, the DBAs can do a better job of optimizing performance. They'll have time to add appropriate indexes and review our SQL.
    - It will be easier for developers to debug and enhance the application, because there is only one schema and one app (there might be dozens of schema versions if there are hundreds of instances, with a different version of the app for each version of the schema). This reduces costs too. The alternative is having to start every debug session with (1) what version is this customer running, and (2) let's struggle to recreate the corresponding development environment, code and database. (We'd need a virtual machine that includes the code AND database instance for each patch and release!)
    - Licensing Oracle is cheaper because it's priced per server irrespective of heft (or something -- I don't know anything about the subject).
    - The database becomes a viable persistent store for web session data, because there is just one instance.
    - Some database operations are easier with one multi-client instance, like finding a participant when they're hazy about which customer they (or their spouse, maybe) work for: all the names are in one table.
    - Reporting across customers is straightforward.

    Q2: What are other advantages of having multiple clients in one instance?

    Q3: Which approach do you think is better (and why)? Instance per customer, or all customers in one instance?

    I'm concerned that having one multi-client instance makes migration near-impossible, and that's a deal killer... unless there is a compromise solution, like having two multi-client instances, the old and the new. In that case, we would design cross-instance solutions for finding participants, reporting, etc., so customers could go from one multi-client instance to the next without anything breaking.

    THANKS SO MUCH for your collective advice! This issue is beyond me -- but not beyond the collective you. :) Hoytster

    Read the article

  • Using Parallel Extensions with ThreadStatic attribute. Could it leak memory?

    - by the-locster
    I'm using Parallel Extensions fairly heavily, and I've just now encountered a case where using thread-local storage might be sensible, to allow re-use of objects by worker threads. As such, I was looking at the ThreadStatic attribute, which marks a static field/variable as having a unique value per thread. It seems to me that it would be unwise to use PE with the ThreadStatic attribute without any guarantee of thread re-use by PE. That is, if threads are created and destroyed to some degree, would the variables (and thus the objects they point to) remain in thread-local storage for some indeterminate amount of time, thus causing a memory leak? Or perhaps the thread storage is tied to the threads and disposed of when the threads are disposed? But then you still potentially have threads in a pool that are long-lived and that accumulate thread-local storage from the various pieces of code the threads are used for. Is there a better approach to obtaining thread-local storage with PE? Thank you.
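    One candidate answer, offered as a sketch rather than a verdict on the lifetime question: .NET 4 ships System.Threading.ThreadLocal<T> alongside Parallel Extensions, which scopes per-thread values to a disposable instance instead of a static field, so the values can be released deterministically:

        using System;
        using System.Text;
        using System.Threading;
        using System.Threading.Tasks;

        class Example
        {
            static void Main()
            {
                // One StringBuilder per worker thread, created lazily on first use.
                using (var buffer = new ThreadLocal<StringBuilder>(() => new StringBuilder()))
                {
                    Parallel.For(0, 100, i =>
                    {
                        var sb = buffer.Value;   // re-used by whichever thread runs this body
                        sb.Clear();
                        sb.Append("item ").Append(i);
                    });
                }   // disposing releases the per-thread values with the instance
            }
        }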

    Read the article

  • Running OpenStack Icehouse with ZFS Storage Appliance

    - by Ronen Kofman
    A couple of months ago Oracle announced support for an OpenStack Cinder plugin with the ZFS Storage Appliance (aka ZFSSA). With our recent release of the Icehouse tech preview, I thought it a good opportunity to demonstrate the ZFSSA plugin working with Icehouse. One thing that helps a lot in getting started with ZFSSA is that it has a VirtualBox simulator. This simulator allows users to try out the appliance's features before getting a real box: users can test the functionality and design an environment even before they have a real appliance, which makes the deployment process much more efficient. With OpenStack this is especially nice, because having a simulator on the other end allows us to test the complete Cinder plugin and check the entire integration on a single server or even a laptop. Let's see how this works.

    Installing and Configuring the Simulator

    To get started we first need to download the simulator, which is available here; unzip it and it is ready to be imported into VirtualBox. If you do not already have VirtualBox installed, you can download it from here according to your platform of choice. To import the simulator, go to the VirtualBox console, File -> Import Appliance, navigate to the location of the simulator and import the virtual machine. When opening the virtual machine you will need to make the following changes:

    - Network - by default the network is "Host Only"; change that to "Bridged" so the VM can connect to the network and be accessible.
    - Memory (optional) - the VM comes with a default of 2560MB, which may be fine, but if you have more memory that could not hurt; in my case I decided to give it 8192.
    - vCPU (optional) - by default the VM comes with 1 vCPU; I decided to change it to two, and you are welcome to do so too.

    And here is how the VM looks:

    Start the VM. When the boot process completes we will need to change the root password, and then the simulator is running and ready to go. Now that the simulator is up and running, we can access the simulated appliance using the URL https://<IP or DNS name>:215/; the IP is shown on the virtual machine console. At this stage we will need to configure the appliance; in my case I did not change any of the defaults (in other words, I pressed 'commit' several times) and the simulated appliance was configured and ready to go. We will need to enable REST access, otherwise Cinder will not be able to call the appliance. We do that in Configuration -> Services; at the end of the page there is a 'REST' button, so enable it. If you are a more advanced user you can set additional features in the appliance, but for the purpose of this demo this is sufficient. One final step is to create a pool: go to Configuration -> Storage and add a pool; as shown below, the pool is named "default". The simulator is now running, configured and ready for action.

    Configuring Cinder

    Back in OpenStack, I have a multi-node deployment created according to the "Getting Started with Oracle VM, Oracle Linux and OpenStack" guide, using the Icehouse tech preview release. Now we need to install and configure the ZFSSA Cinder plugin using the README file. In short the steps are as follows:

    1. Copy the files from here to the control node and place them at: /usr/lib/python2.6/site-packages/cinder/volume/drivers/zfssa

    2.
    Configure the plugin by editing /etc/cinder/cinder.conf:

        # Driver to use for volume creation (string value)
        #volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
        volume_driver=cinder.volume.drivers.zfssa.zfssaiscsi.ZFSSAISCSIDriver
        zfssa_host = <HOST IP>
        zfssa_auth_user = root
        zfssa_auth_password = <ROOT PASSWORD>
        zfssa_pool = default
        zfssa_target_portal = <HOST IP>:3260
        zfssa_project = test
        zfssa_initiator_group = default
        zfssa_target_interfaces = e1000g0

    3. Restart the cinder-volume service: service openstack-cinder-volume restart

    4. Look into the log file; it will tell us whether everything works well so far. If you see any errors, fix them before continuing.

    5. Install the iscsi-initiator-utils package; this is important, since the plugin uses iscsi commands from this package: yum install -y iscsi-initiator-utils

    The installation and configuration are very simple; we do not need to have a "project" in the ZFSSA, but we do need to define a pool.

    Creating and Using Volumes in OpenStack

    We are now ready to work. To get started, let's create a volume in OpenStack and see it show up on the simulator:

        #  cinder create 2 --display-name my-volume-1
        +---------------------+--------------------------------------+
        |       Property      |                Value                 |
        +---------------------+--------------------------------------+
        |     attachments     |                  []                  |
        |  availability_zone  |                 nova                 |
        |       bootable      |                false                 |
        |      created_at     |      2014-08-12T04:24:37.806752      |
        | display_description |                 None                 |
        |     display_name    |             my-volume-1              |
        |      encrypted      |                False                 |
        |          id         | df67c447-9a36-4887-a8ff-74178d5d06ee |
        |       metadata      |                  {}                  |
        |         size        |                  2                   |
        |     snapshot_id     |                 None                 |
        |     source_volid    |                 None                 |
        |        status       |               creating               |
        |     volume_type     |                 None                 |
        +---------------------+--------------------------------------+

    In the simulator:

    Extending the volume to 5G:

        # cinder extend df67c447-9a36-4887-a8ff-74178d5d06ee 5

    In the simulator:

    Creating templates using Cinder Volumes

    By default OpenStack supports ephemeral storage, where an image is copied into the run area during instance launch and deleted when the instance is terminated. With Cinder we can create persistent storage and launch instances from a Cinder volume. Booting from a volume has several advantages, one of the main ones being speed: no matter how large the volume is, the launch operation is immediate. There is no copying of an image to a run area, an operation which can take a long time when using ephemeral storage (depending on image size). In this deployment we have a Glance image of Oracle Linux 6.5, and I would like to make it into a volume which I can boot from.
    When creating a volume from an image we actually "download" the image into the volume and make the volume bootable. This process can take some time, depending on the image size; during the download we will see the following status:

        # cinder create --image-id 487a0731-599a-499e-b0e2-5d9b20201f0f --display-name ol65 2
        # cinder list
        +--------------------------------------+-------------+--------------+------+-------------+ …
        |                  ID                  |    Status   | Display Name | Size | Volume Type | …
        +--------------------------------------+-------------+--------------+------+-------------+ …
        | df67c447-9a36-4887-a8ff-74178d5d06ee |  available  | my-volume-1  |  5   |     None    | …
        | f61702b6-4204-4f10-8bdf-7da792f15c28 | downloading |     ol65     |  2   |     None    | …
        +--------------------------------------+-------------+--------------+------+-------------+ …

    After the download is complete we will see that the volume status has changed to "available" and that the bootable state is "true". We can use this new volume to boot an instance from, or we can use it as a template. Cinder can create a volume from another volume, and ZFSSA can replicate volumes instantly in the back end. The result is an efficient template model, where users can spawn an instance from a "template" instantly, even if the template is very large in size. Let's try replicating the bootable volume with Oracle Linux 6.5 on it, creating 3 additional bootable volumes:

        # cinder create 2 --source-volid f61702b6-4204-4f10-8bdf-7da792f15c28 --display-name ol65-bootable-1
        # cinder create 2 --source-volid f61702b6-4204-4f10-8bdf-7da792f15c28 --display-name ol65-bootable-2
        # cinder create 2 --source-volid f61702b6-4204-4f10-8bdf-7da792f15c28 --display-name ol65-bootable-3
        # cinder list
        +--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+
        |                  ID                  |   Status  |   Display Name  | Size | Volume Type | Bootable | Attached to |
        +--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+
        | 9bfe0deb-b9c7-4d97-8522-1354fc533c26 | available | ol65-bootable-2 |  2   |     None    |   true   |             |
        | a311a855-6fb8-472d-b091-4d9703ef6b9a | available | ol65-bootable-1 |  2   |     None    |   true   |             |
        | df67c447-9a36-4887-a8ff-74178d5d06ee | available |   my-volume-1   |  5   |     None    |  false   |             |
        | e7fbd2eb-e726-452b-9a88-b5eee0736175 | available | ol65-bootable-3 |  2   |     None    |   true   |             |
        | f61702b6-4204-4f10-8bdf-7da792f15c28 | available |       ol65      |  2   |     None    |   true   |             |
        +--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+

    Note that the creation of those 3 volumes was almost immediate; there is no need to download or copy, since ZFSSA takes care of the volume copy for us. Start 3 instances:

        # nova boot --boot-volume a311a855-6fb8-472d-b091-4d9703ef6b9a --flavor m1.tiny ol65-instance-1 --nic net-id=25b19746-3aea-4236-8193-4c6284e76eca
        # nova boot --boot-volume 9bfe0deb-b9c7-4d97-8522-1354fc533c26 --flavor m1.tiny ol65-instance-2 --nic net-id=25b19746-3aea-4236-8193-4c6284e76eca
        # nova boot --boot-volume e7fbd2eb-e726-452b-9a88-b5eee0736175 --flavor m1.tiny ol65-instance-3 --nic net-id=25b19746-3aea-4236-8193-4c6284e76eca

    Instantly replicating volumes is a very powerful feature, especially for large templates.
    The ZFSSA Cinder plugin allows us to take advantage of this feature of ZFSSA. By offloading some of the operations to the array, OpenStack creates a highly efficient environment where persistent volumes can be created instantly from a template. That's all for now. With this environment you can continue to test ZFSSA with OpenStack, and when you are ready for the real appliance the operations will look the same. @RonenKofman

    Read the article

  • How Can I Check an Object to See its Type and Return A Casted Object

    - by Russ Bradberry
    I have a method to which I pass an object. In this method I check its type, and depending on the type I do something with it and return a Long. I have tried every way I can think of to do this, and I always get several compiler errors telling me it expects a certain object but gets another. Can someone please explain to me what I am doing wrong and guide me in the right direction? What I have tried thus far is below:

        override def getInteger(obj: Object) = {
          if (obj.isInstanceOf[Object])
            null
          else if (obj.isInstanceOf[Number])
            (obj: Number).longValue()
          else if (obj.isInstanceOf[Boolean])
            if (obj: Boolean) 1 else 0
          else if (obj.isInstanceOf[String])
            if ((obj: String).length == 0 | (obj: String) == "null")
              null
            else
              try {
                Long.parse(obj: String)
              } catch {
                case e: Exception =>
                  throw new ValueConverterException("value \"" + obj.toString() + "\" of type " + obj.getClass().getName() + " is not convertible to Long")
              }
        }
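    A hedged sketch of the idiomatic fix: in Scala, (obj: Number) is a type ascription, not a cast, which is one source of those compiler errors, and pattern matching both tests and casts in one step. The null check in the first branch and the exception choice are assumptions about the intent; the signature must still match whatever base method is being overridden:

        override def getInteger(obj: Object): java.lang.Long = obj match {
          case null => null                     // assuming isInstanceOf[Object] was meant as a null check
          case n: Number => n.longValue()
          case b: java.lang.Boolean => if (b) 1L else 0L
          case s: String if s.isEmpty || s == "null" => null
          case s: String =>
            try s.toLong
            catch {
              case e: NumberFormatException =>
                throw new ValueConverterException("value \"" + obj + "\" of type " +
                  obj.getClass.getName + " is not convertible to Long")
            }
        }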

    Read the article

  • Providing multi-version databases for backward compatibility for production applications/databases.

    - by JavaRocky
    How can I manage multiple versions of a database easily? I have some data (views defined as selects over data originating in tables from other schemas) which other databases may reference by various means, including database synonyms and links. I wish to provide a sort of interface/guarantee for the applications/databases which use this data, in case I need to update the views in the future for correctness or applicability inside my database. How can I achieve this in a maintained, controlled and easy way? I am using Oracle 10g, if that matters.

    Read the article

  • rdoc and the "--accessor" option

    - by Brian Ploetz
    rdoc --help says:

        --accessor, -A accessorname[,..]
            comma separated list of additional class methods that should be
            treated like 'attr_reader' and friends. Option may be repeated.
            Each accessorname may have '=text' appended, in which case that
            text appears where the r/w/rw appears for normal accessors.

    Does anyone have any working examples of doing this (both the accessor method definition and the rdoc command invocation)? No matter what combination I try, my accessors will not show up in the RDoc output. Thanks.
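    For what it's worth, a sketch of what an attempt would look like based purely on the quoted help text; the class, macro, and file names are made up, and this is exactly the kind of invocation the questioner reports not working:

        # lib/widget.rb -- my_accessor is a hypothetical attr_accessor-like macro
        class Widget
          def self.my_accessor(*names)
            names.each do |name|
              define_method(name) { instance_variable_get("@#{name}") }
              define_method("#{name}=") { |v| instance_variable_set("@#{name}", v) }
            end
          end

          my_accessor :size   # should be documented like attr_accessor
        end

    and the corresponding invocation, per the help text:

        $ rdoc --accessor my_accessor=rw lib/widget.rb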

    Read the article

  • OpenJPA - not flushing before queries

    - by Ales
    Hi all, how can I set OpenJPA to flush before queries? When I change some values in the database, I want those changes propagated into the application. I tried these settings in persistence.xml:

        <property name="openjpa.FlushBeforeQueries" value="true" />
        <property name="openjpa.IgnoreChanges" value="false"/>
        (false/true - same behavior in my case)
        <property name="openjpa.DataCache" value="false"/>
        <property name="openjpa.RemoteCommitProvider" value="sjvm"/>
        <property name="openjpa.ConnectionRetainMode" value="always"/>
        <property name="openjpa.QueryCache" value="false"/>

    Any idea? Thanks

    Read the article
