Search Results

Search found 91614 results on 3665 pages for 'new developer'.


  • Creating properly aligned partitions on a replacement disk

    - by Marius Gedminas
    I've a typical small office server with two hard disks configured for RAID-1 (mirroring). Each disk has several partitions: one for swap, the others paired in several /dev/mdX arrays. Every couple of years one of the disks dies and is replaced. The replacement typically goes something like this:

        # copy partition table from the remaining good disk to the empty replacement disk
        # (instead of /dev/good_disk and /dev/new_disk I use /dev/sda and /dev/sdb, as appropriate)
        sfdisk -d /dev/good_disk | sfdisk /dev/new_disk

        # install boot loader
        grub-install /dev/new_disk

        # create swap partition reusing the same UUID, so I don't need to edit /etc/fstab
        mkswap /dev/new_disk1 -U xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

        # hot-add the new partitions to my RAID arrays
        mdadm /dev/md0 -a /dev/new_disk2
        mdadm /dev/md1 -a /dev/new_disk5
        mdadm /dev/md2 -a /dev/new_disk6
        mdadm /dev/md3 -a /dev/new_disk7
        mdadm /dev/md4 -a /dev/new_disk8

    The disks were originally partitioned with cfdisk back in 2009, so the partition table is aligned traditionally to cylinder boundaries (255 heads * 63 sectors). This is not the optimal configuration for new 4K-sector drives. My question is: how can I create a set of partitions for the new disk, ensure they're properly aligned, and make sure they have the correct sizes for my RAID arrays (rounding up is acceptable, I suppose, but rounding down is definitely not)?

    Read the article

  • Bind listbox in WPF with grouping

    - by Michael Stoll
    Hi, I've got a collection of ViewModels and want to bind a ListBox to them. Doing some investigation I found this. So my ViewModels look like this (pseudo-code):

        interface IItemViewModel { string DisplayName { get; } }

        class AViewModel : IItemViewModel { string DisplayName { return "foo"; } }
        class BViewModel : IItemViewModel { string DisplayName { return "foo"; } }

        class ParentViewModel
        {
            IEnumerable<IItemViewModel> Items
            {
                get { return new IItemViewModel[] { new AItemViewModel(), new BItemViewModel() }; }
            }
        }

        class GroupViewModel
        {
            static readonly GroupViewModel GroupA = new GroupViewModel(0);
            static readonly GroupViewModel GroupB = new GroupViewModel(1);

            int GroupIndex;

            GroupViewModel(int groupIndex) { this.GroupIndex = groupIndex; }

            string DisplayName { get { return "This is group " + GroupIndex.ToString(); } }
        }

        class ItemGroupTypeConverter : IValueConverter
        {
            object Convert(object value, Type targetType, object parameter, CultureInfo culture)
            {
                if (value is AItemViewModel)
                    return GroupViewModel.GroupA;
                else
                    return GroupViewModel.GroupB;
            }
        }

    And this XAML:

        <UserControl.Resources>
            <vm:ItemsGroupTypeConverter x:Key="ItemsGroupTypeConverter"/>
            <CollectionViewSource x:Key="GroupedItems" Source="{Binding Items}">
                <CollectionViewSource.GroupDescriptions>
                    <PropertyGroupDescription Converter="{StaticResource ItemsGroupTypeConverter}"/>
                </CollectionViewSource.GroupDescriptions>
            </CollectionViewSource>
        </UserControl.Resources>
        <ListBox ItemsSource="{Binding Source={StaticResource GroupedItems}}">
            <ListBox.GroupStyle>
                <GroupStyle>
                    <GroupStyle.HeaderTemplate>
                        <DataTemplate>
                            <TextBlock Text="{Binding DisplayName}" FontWeight="bold" />
                        </DataTemplate>
                    </GroupStyle.HeaderTemplate>
                </GroupStyle>
            </ListBox.GroupStyle>
            <ListBox.ItemTemplate>
                <DataTemplate>
                    <TextBlock Text="{Binding DisplayName}" />
                </DataTemplate>
            </ListBox.ItemTemplate>
        </ListBox>

    This works somehow, except for the fact that the binding of the HeaderTemplate does not work. Anyhow, I'd prefer omitting the TypeConverter and the CollectionViewSource. Isn't there a way to use a property of the ViewModel for grouping? I know that in this sample scenario it would be easy to replace the GroupViewModel with a string and have it working, but that's not an option. So how can I bind the HeaderTemplate to the GroupViewModel?
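
    A hedged sketch of the property-based grouping being asked about, with one caveat on the header binding. The Group property below is an assumption, not part of the original code: if each item exposes its GroupViewModel directly, a PropertyGroupDescription can replace the converter.

        // Hypothetical reshaping: each item carries its group, so no IValueConverter is needed.
        public interface IItemViewModel
        {
            string DisplayName { get; }
            GroupViewModel Group { get; }   // assumed new property, not in the original code
        }

        // Code-behind equivalent of the CollectionViewSource resource:
        // var view = (CollectionViewSource)Resources["GroupedItems"];
        // view.GroupDescriptions.Add(new PropertyGroupDescription("Group"));

    On the header problem: inside a GroupStyle header the DataContext is a CollectionViewGroup, whose Name property holds the group key (here, the GroupViewModel). The template therefore likely needs Text="{Binding Name.DisplayName}" rather than binding DisplayName directly, and the bound members must be public for the binding engine to see them.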

    Read the article

  • Tab Sweep - Upgrade to Java EE 6, Groovy NetBeans, JSR310, JCache interview, OEPE, and more

    - by alexismp
    Recent Tips and News on Java, Java EE 6, GlassFish & more:

    • Implementing JSR 310 (New Date/Time API) in Java 8 Is Very Strongly Favored by Developers (java.net)
    • Upgrading To The Java EE 6 Web Profile (Roger)
    • NetBeans for Groovy (blogs.oracle.com)
    • Client Side MOXy JSON Binding Explained (Blaise)
    • Control CDI Containers in SE and EE (Strub)
    • Java EE on Google App Engine: CDI to the Rescue - Aleš Justin (jaxenter)
    • The Java EE 6 Example - Testing Galleria - Part 4 (Markus)
    • Why is OpenWebBeans so fast? (Strub)
    • Welcome to the new Oracle Enterprise Pack for Eclipse Blog (blogs.oracle.com)
    • Java Spotlight Episode 75: Greg Luck on JSR 107 Java Temporary Caching API (Spotlight Podcast)
    • Glassfish cluster installation and administration on top of SSH + public key (Paulo)
    • Jfokus 2012 on Parleys.com (Parleys)
    • Java Tuning in a Nutshell - Part 1 (Rupesh)
    • New Features in Fork/Join from Java Concurrency Master, Doug Lea (DZone)
    • A Java7 Grammar for VisualLangLab (Sanjay)
    • Glassfish version 3.1.2: Secure Admin must be enabled to access the DAS remotely (Charlee)
    • Oracle Announces the Certification of the Oracle Database on Oracle Linux 6 and Red Hat Enterprise Linux 6

    Read the article

  • Clipping polygons in XNA with stencil (not using spritebatch)

    - by Blau
    The problem... I'm drawing polygons, in this case boxes, and I want to clip children polygons with their parent's client area.

        // Class Region
        public void Render(GraphicsDevice Device, Camera Camera)
        {
            int StencilLevel = 0;
            Device.Clear( ClearOptions.Stencil, Vector4.Zero, 0, StencilLevel );
            Render( Device, Camera, StencilLevel );
        }

        private void Render(GraphicsDevice Device, Camera Camera, int StencilLevel)
        {
            Device.SamplerStates[0] = this.SamplerState;
            Device.Textures[0] = this.Texture;
            Device.RasterizerState = RasterizerState.CullNone;
            Device.BlendState = BlendState.AlphaBlend;
            Device.DepthStencilState = DepthStencilState.Default;

            Effect.Prepare( this, Camera );

            Device.DepthStencilState = GlobalContext.GraphicsStates.IncMask;
            Device.ReferenceStencil = StencilLevel;

            foreach ( EffectPass pass in Effect.Techniques[Technique].Passes )
            {
                pass.Apply( );
                Device.DrawUserIndexedPrimitives<VertexPositionColorTexture>( PrimitiveType.TriangleList, VertexData, 0, VertexData.Length, IndexData, 0, PrimitiveCount );
            }

            foreach ( Region child in ChildrenRegions )
            {
                child.Render( Device, Camera, StencilLevel + 1 );
            }

            Effect.Prepare( this, Camera );

            // This does not work
            Device.BlendState = GlobalContext.GraphicsStates.NoWriteColor;
            Device.DepthStencilState = GlobalContext.GraphicsStates.DecMask;
            Device.ReferenceStencil = StencilLevel; // This should be +1, but in that case the last drawn is blue and overlaps all

            foreach ( EffectPass pass in Effect.Techniques[Technique].Passes )
            {
                pass.Apply( );
                Device.DrawUserIndexedPrimitives<VertexPositionColorTexture>( PrimitiveType.TriangleList, VertexData, 0, VertexData.Length, IndexData, 0, PrimitiveCount );
            }
        }

        public static class GraphicsStates
        {
            public static BlendState NoWriteColor = new BlendState( )
            {
                ColorSourceBlend = Blend.One,
                AlphaSourceBlend = Blend.One,
                ColorDestinationBlend = Blend.InverseSourceAlpha,
                AlphaDestinationBlend = Blend.InverseSourceAlpha,
                ColorWriteChannels1 = ColorWriteChannels.None
            };

            public static DepthStencilState IncMask = new DepthStencilState( )
            {
                StencilEnable = true,
                StencilFunction = CompareFunction.Equal,
                StencilPass = StencilOperation.IncrementSaturation,
            };

            public static DepthStencilState DecMask = new DepthStencilState( )
            {
                StencilEnable = true,
                StencilFunction = CompareFunction.Equal,
                StencilPass = StencilOperation.DecrementSaturation,
            };
        }

    How can I achieve this?

    EDIT: I've just realized that NoWriteColors.ColorWriteChannels1 should be NoWriteColors.ColorWriteChannels. :) Now it's clipping right. Any other approach?

    Read the article

  • SSMS Tools Pack 1.9.3 is out!

    - by Mladen Prajdic
    This release adds a great new feature and fixes a few bugs. The new feature, called Window Content History, saves the whole text in all opened SQL windows every N minutes, with the default being 30 minutes. This feature fixes the shortcoming of the Query Execution History, which is saved only when the query is run. If you're working on a large script and never execute it, the existing Query Execution History wouldn't save it. By contrast, the Window Content History saves everything in a .sql file, so you can even open it in SSMS. The Query Execution History and Window Content History files are correlated by the same directory and file name, so when you search through the Query Execution History you get to see the whole saved Window Content History for that query. Because Window Content History saves data in simple searchable .sql files, there isn't a special search editor built in. It is turned ON by default but, despite the built-in optimizations for space minimization, be careful not to let it fill your disk. You can see how it looks in the pictures in the feature list.

    The fixed bugs are:

    • SSMS 2008 R2 slowness reported by a few people.
    • An object explorer context menu bug where it showed multiple SSMS Tools entries and showed wrong entries for a node.
    • A datagrid bug in SQL snippets.
    • Ability to read illegal XML characters from log files.
    • Fixed the upper limit bug of a saved history text to 5 MB.
    • A bug where searching through result sets prevented search.
    • A bug with text formatting erroring out for certain scripts.
    • A bug with finding servers where it would return null even though servers existed.
    • Run custom scripts objects had a bug where |SchemaName| didn't display the correct table schema for columns. This is fixed. Also |NodeName| and |ObjectName| values now show the same thing.

    You can download the new version 1.9.3 here. Enjoy it!

    Read the article

  • How to restrict paddle movement using Farseer Physics engine 3.2

    - by brainydexter
    I am new to using Farseer Physics Engine 3.2 (FPE), so please bear with my questions. Also, since FPE 3.2 is based on Box2D, I have been reading the Box2D manual and pieces of code scattered in samples to better understand terminology and usage. Pong is usually my testbed whenever I try to do something new. Here is one of the issues I am running into: how can I restrict paddles to move only along the Y axis? The ball comes in and knocks off the paddles, and everything floats in space afterwards (box = rectangle and ball = circle; see the sketch below). I know MKS is the unit system, but is there a recommendation for sizes/positions to be used? I know this is a very generic question, but it would be good to know a simple set of values that one could use for making a game as simple as Pong. Between Box2D and FPE, I have some doubts:

    • What is the recommended way of making a body in FPE? world.CreateBody() does not exist in FPE.
    • The Box2D manual recommends never to "new" a body (since Box2D uses small object allocators), so is there a recommended way in Farseer to create a body (apart from factories)?
    • In Box2D, it is recommended to keep track of the body object, since it is also the parent to fixture(s). Why is it that in most of the examples, the fixture object is tracked? Is there a reason why the body is not tracked?

    Thanks
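
    For the first question, a minimal sketch of one common approach, assuming FPE 3.x's Body API (paddleBody and paddleStartX are illustrative names, not from the original post): cancel any horizontal motion the ball imparts, every frame, so the paddle stays on its vertical rail.

        // Once, when creating the paddle:
        paddleBody.FixedRotation = true;   // impacts can't spin the paddle
        paddleBody.IgnoreGravity = true;   // paddles shouldn't fall

        // Every Update(), after the physics step:
        paddleBody.LinearVelocity = new Vector2(0f, paddleBody.LinearVelocity.Y);
        paddleBody.Position = new Vector2(paddleStartX, paddleBody.Position.Y);

    Farseer also ships joint factories; a prismatic joint constraining the body to a fixed vertical axis achieves the same thing without per-frame correction, at the cost of a little more setup.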

    Read the article

  • [MINI HOW-TO] Remove the Search Helper Extension from Firefox

    - by Asian Angel
    If you found a new surprise extension added to Firefox after the June Patch from Microsoft, then you are likely to be rather unhappy right now. Join us as we show you how to remove the Search Helper extension from your browser.

    An Unexpected Addition to Your Extensions
    You may be wondering what the new mysterious extension that showed up is for. Its purpose is to help the Bing Toolbar better integrate with your browser. Unless you have the Bing Toolbar installed, you really do not need this cluttering your browser up. So how do you get rid of it?

    Removing the Extension
    In order to remove the extension you will need to navigate to the following location:

        C:\Program Files\Microsoft\Search Enhancement Pack\Search Helper

    Once there, delete the "firefoxextension" folder... that is all there is to it. If you want to remove the search helper add-on for Internet Explorer, then delete the "SEPsearchhelperie.dll" file while you are here.

    Note: You may need to have administrator rights in order to delete the folder.

    No more Search Helper Extension! If you are unhappy about this update being snuck into your system, following these instructions will remove it.

    Microsoft Support Page About Update KB982217

    Read the article

  • May In Review

    - by Richard Bingham
    Content Highlights
    Our Application Composer series had fresh articles on the related internal data model and lots more on Groovy, including how to manipulate your data and a useful table showing you when and where Groovy scripting can be used. For those just getting started with Fusion Applications user security, this article gave some handy examples to get you going. Jani's Java Cloud Service series continued strongly, with examples of integration using ADFbc, a Web Service Proxy client, and the ADF Data Control.

    From Other Teams
    The Oracle A-Team provided a broad set of articles during May, with various topics related to Fusion Applications, including patching and performance management for on-premises deployments, and generic content on both integration and data extraction via web services. As part of their presentation to the Oracle Israel User Group, our AppsUX colleagues explained the fresh new type of interface to Oracle Sales, through the voice mobile apps. This was in addition to demonstrations of the newer Release 8 Simplified UI customization options. Finally, Angelo, our colleague in Platform Technical Services, explained in his blog how to use the findCriteria element included in all Oracle Sales web services to reduce the data returned, making response payloads much more specific, lightweight and therefore usable.

    Events and Announcements
    Oliver explained in this post about the new set of code samples on OTN for extending Sales Cloud using Oracle Platform as a Service. In addition, a new set of cloud developer documentation was released to provide more guided learning on extending Oracle Sales Cloud with Oracle Platform as a Service (PaaS) services. This illustrative content is mainly delivered as downloadable PDFs, and includes topics covering sales cloud extensibility basics, using web services from JDeveloper (including security), using PaaS for SaaS development, and ADF (including mobile).

    Read the article

  • What are the best practices for phasing out obsolete code?

    - by P.Brian.Mackey
    I have the need to phase out an obsolete method. I am aware of the [Obsolete] attribute. Does Microsoft have a recommended best practice guide for doing this? Here's my current plan:

    A. I do not want to create a new assembly, because developers would have to add a new reference to their projects and I expect to get a lot of grief from my boss and co-workers if they must do this. We also do not maintain multiple assembly versions; we only use the latest version. Changing this practice would require changing our deployment process, which is a big issue (have to teach people how to do things with TFS instead of FinalBuilder and get them to give up FinalBuilder).

    B. Mark the old method obsolete.

    C. Because the implementation is changing (not the method signature), I need to rename the method rather than create an overload. So, to make users aware of the proper method, I plan to add a message to the [Obsolete] attribute. This part bothers me, because the only change I'm making is decoupling the method from the connection string. But, because I'm not adding a new assembly, I see no way around this.

    Result:

        [Obsolete("Please don't use this anymore because it does not implement IMyDbProvider. Use XXX instead.")]
        /// <summary>
        ///
        /// </summary>
        /// <param name="settingName"></param>
        /// <returns></returns>
        public static Dictionary<string, Setting> ReadSettings(string settingName)
        {
            return ReadSettings(settingName, SomeGeneralClass.ConnectionString);
        }

        public Dictionary<string, Setting> ReadSettings2(string settingName)
        {
            // IMyDbProvider.ConnectionString private member added to class.
            // Probably have to make this an instance method.
            return ReadSettings(settingName);
        }

    Read the article

  • XNA Easy Storage XBOX 360 High Scores

    - by user1003211
    To follow up from a previous query, I need some help with the implementation of EasyStorage high scores, which is bringing up some errors on the Xbox. I get the prompt screen, a SaveDevice is selected, and a file is created! However, the file remains empty (I've tried prepopulating, but still get errors). The full portions of the scoring code can be found here: http://pastebin.com/74v897Yt

    The current issue in particular is in LoadHighScores() - "There is an error in XML document (0, 0)." under the line:

        data = (HighScoreData)serializer.Deserialize(stream);

    I'm not sure whether this line is correct either:

        HighScoreData data = new HighScoreData();

        public static HighScoreData LoadHighScores(string container, string filename)
        {
            HighScoreData data = new HighScoreData();
            if (Global.SaveDevice.FileExists(container, filename))
            {
                Global.SaveDevice.Load(container, filename, stream =>
                {
                    File.Open(Global.fileName_options, FileMode.OpenOrCreate, FileAccess.Read);
                    try
                    {
                        // Read the data from the file
                        XmlSerializer serializer = new XmlSerializer(typeof(HighScoreData));
                        data = (HighScoreData)serializer.Deserialize(stream);
                    }
                    finally
                    {
                        // Close the file
                        stream.Close();
                        // stream.Dispose();
                    }
                });
            }
            return (data);
        }

    I call PromptMe(); when the Start button is pressed at the beginning. I call:

        if (Global.SaveDevice.IsReady) { entries = LoadHighScores(HighScoresContainer, HighScoresFilename); }

    during the menu screen to try and display the highscore screen. I call SaveHighScore(); when the game ends. I've tried altering the struct code to a class but still no luck. Any help greatly appreciated.
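
    For what it's worth, "There is an error in XML document (0, 0)" from XmlSerializer usually means the stream being deserialized is empty. A hedged sketch of a defensive load, keeping the Global.SaveDevice shape from the question (an assumption about the EasyStorage setup) and dropping the stray File.Open call, which opens an unrelated file and leaks the handle:

        // Requires System.Xml.Serialization; HighScoreData and Global.SaveDevice as in the question.
        public static HighScoreData LoadHighScoresSafe(string container, string filename)
        {
            HighScoreData data = new HighScoreData();
            if (Global.SaveDevice.FileExists(container, filename))
            {
                Global.SaveDevice.Load(container, filename, stream =>
                {
                    // An empty file produces "error in XML document (0, 0)",
                    // so only deserialize when there are bytes to read.
                    if (stream.Length > 0)
                    {
                        XmlSerializer serializer = new XmlSerializer(typeof(HighScoreData));
                        data = (HighScoreData)serializer.Deserialize(stream);
                    }
                });
            }
            return data;
        }

    EasyStorage closes the stream it hands to the lambda, so the manual stream.Close() in the finally block should also be unnecessary.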

    Read the article

  • Policy Administration is the Top 2011 IT Priority for Insurers

    - by helen.pitts(at)oracle.com
    The current issue of Insurance Networking News includes an interesting column by Novarica's Matt Josefowicz. Recent research by the firm revealed that policy administration replacement or extension is the most common strategic IT project for insurers this year. The article goes on to note that insurers are keenly focused on the business capabilities that can be delivered once the system is in production, as well as the ability to leverage agile development methodologies and true business/IT collaboration during implementation.

    The results are not too surprising given that policy administration is a mission-critical system for life and annuity insurers. As Josefowicz notes, "Core systems are called core for a reason--they are at the heart of the insurer's ability to function. Replacing them is not to be done lightly, but failing to replace them can mean diminishing the ability to compete or function effectively as a company."

    Insurers can no longer rely on inflexible policy administration systems that impede their ability to rapidly configure and bring to market innovative new products, add riders, support changing business processes and take advantage of market opportunities. The ability to leverage the policy administration system to better service customers and distribution channels by providing real-time access to policy information throughout the policy lifecycle is also critical to sustain loyalty and further fuel growth.

    Insurers can benefit from a modern, adaptive policy administration system, like Oracle Insurance Policy Administration for Life and Annuity. You can learn more about the industry's most highly advanced, rules-based system, which is unmatched for its highly flexible, rules-based configurability, performance and extensibility, as well as global market industry trends, by viewing a complimentary, on-demand Webcast, Adapt, Transform and Grow: Accelerate Speed to Market with Adaptive Insurance Policy Administration.

    Data conversions can be a daunting process for many insurers when deciding to modernize, in particular when consolidating from multiple, disparate legacy policy administration systems to a single new platform. Migrating from a legacy system requires a well-thought-out approach that builds on the industry's best thinking from previous modernization efforts and takes data migration off the critical path by leveraging proven methodology and tools to capitalize on the new system's capabilities. We'll discuss more about this approach in a future Oracle Insurance blog.

    Helen Pitts is senior product marketing manager for Oracle Insurance's life and annuities solutions.

    Read the article

  • How do you prefer to handle image spriting in your web projects?

    - by Macy Abbey
    It seems like these days it is pretty much mandatory for web applications to sprite images if they want many images on their site AND a fast load time. (Spriting is the process of combining all images referenced from a style sheet into one/few image(s), with each reference containing a different background position.) I was wondering what method of implementing sprites you all prefer in your web applications, given that we are referring to non-dynamic images which are included/designed by the programming team, and not images which are dynamically uploaded by a third party.

    1. Add new images to an existing sprite by hand, and create the new CSS reference by hand.
    2. Generate a sprite server-side once per build from your CSS files, which all reference single images (set as background images of an HTML element that is the same size as the image you are spriting), and update all CSS references programmatically (see the sketch below).
    3. Use a sprite-generating program to generate a sprite image for you once per release, and hand-insert the new CSS class / image into your project.
    4. Other methods?

    I prefer two, as it requires very little hand-coding and image editing.
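
    A hedged sketch of the core build step in option two: stacking the source images into one vertical sprite and printing the CSS offsets. It assumes classic .NET System.Drawing; the images\sprites input folder and file naming are illustrative.

        using System;
        using System.Drawing;
        using System.IO;
        using System.Linq;

        class SpriteBuilder
        {
            static void Main()
            {
                string[] files = Directory.GetFiles(@"images\sprites", "*.png"); // assumed input folder
                Bitmap[] images = files.Select(f => new Bitmap(f)).ToArray();

                int width = images.Max(i => i.Width);
                int height = images.Sum(i => i.Height);

                using (var sprite = new Bitmap(width, height))
                using (var g = Graphics.FromImage(sprite))
                {
                    int y = 0;
                    for (int i = 0; i < images.Length; i++)
                    {
                        g.DrawImage(images[i], 0, y);
                        // Each source image is addressed in CSS by its vertical offset.
                        Console.WriteLine(".{0} {{ background-position: 0 -{1}px; }}",
                            Path.GetFileNameWithoutExtension(files[i]), y);
                        y += images[i].Height;
                        images[i].Dispose();
                    }
                    sprite.Save(@"images\sprite.png");
                }
            }
        }

    Wiring the emitted rules back into the style sheet (rather than the console) is the part that varies per project, so it is left out here.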

    Read the article

  • Table Variables: an empirical approach.

    - by Phil Factor
    It isn't entirely a pleasant experience to publish an article only to have it described on Twitter as 'Horrible', and to have it criticized on the MVP forum. When this happened to me in the aftermath of publishing my article on temporary tables recently, I was taken aback, because these critics were experts whose views I respect. What was my crime? It was, I think, to suggest that, despite the obvious quirks, it was best to use Table Variables as a first choice, and to use local Temporary Tables if you hit problems due to these quirks, or if you were doing complex joins using a large number of rows.

    What are these quirks? Well, table variables have advantages if they are used sensibly, but this requires some awareness by the developer about the potential hazards and how to avoid them. You can be hit by a badly-performing join involving a table variable. Table Variables are a compromise, and this compromise doesn't always work out well. Explicit indexes aren't allowed on Table Variables, so one cannot use covering indexes or non-unique indexes. The query optimizer has to make assumptions about the data rather than using column distribution statistics when a table variable is involved in a join, because there aren't any column-based distribution statistics on a table variable. It assumes a reasonably even distribution of data, and is likely to have little idea of the number of rows in the table variables that are involved in queries. However complex the heuristics that are used might be in determining the best way of executing a SQL query, and they most certainly are, the Query Optimizer is likely to fail occasionally with table variables, under certain circumstances, and produce a Query Execution Plan that is frightful. The experienced developer or DBA will be on the lookout for this sort of problem. In this blog, I'll be expanding on some of the tests I used when writing my article to illustrate the quirks, and include a subsequent example supplied by Kevin Boles.

    A simplified example
    We'll start out by illustrating a simple example that shows some of these characteristics. We'll create two tables filled with random numbers and then see how many matches we get between the two tables. We'll forget indexes altogether for this example, and use heaps. We'll try the same join with two table variables, two table variables with OPTION (RECOMPILE) in the JOIN clause, and with two temporary tables. It is all a bit jerky because of the granularity of the timing, which isn't actually happening at the millisecond level (I used DATETIME). However, you'll see that the table variable is outperforming the local temporary table up to 10,000 rows. Actually, even without a use of the OPTION (RECOMPILE) hint, it is doing well.

    What happens when your table size increases? The table variable is, from around 30,000 rows, locked into a very bad execution plan unless you use OPTION (RECOMPILE) to provide the Query Analyser with a decent estimation of the size of the table. However, if it has the OPTION (RECOMPILE), then it is smokin'. Well, up to 120,000 rows, at least. It is performing better than a temporary table, and in a good linear fashion.

    What about mixed table joins, where you are joining a temporary table to a table variable? You'd probably expect that the query analyzer would throw up its hands and produce a bad execution plan as if it were a table variable. After all, it knows nothing about the statistics in one of the tables, so how could it do any better?
    Well, it behaves as if it were doing a recompile. And an explicit recompile adds no value at all. (We just go up to 45,000 rows, since we know the bigger picture now.)

    Now, if you were new to this, you might be tempted to start drawing conclusions. Beware! We're dealing with a very complex beast: the Query Optimizer. It can come up with surprises. What if we change the query very slightly to insert the results into a Table Variable? We change nothing else and just measure the execution time of the statement as before. Suddenly, the table variable isn't looking so much better, even taking into account the time involved in doing the table insert. OK, if you haven't used OPTION (RECOMPILE) then you're toast. Otherwise, there isn't much in it between the Table Variable and the temporary table. The table variable is faster up to 8,000 rows, and then there's not much in it up to 100,000 rows. Past the 8,000 row mark, we've lost the advantage of the table variable's speed. Any general rule you may be formulating has just gone for a walk. What we can conclude from this experiment is that if you join two table variables, and can't use constraints, you're going to need that OPTION (RECOMPILE) hint.

    Count Dracula and the Horror Join
    These tables of integers provide a rather unreal example, so let's try a rather different example, and get stuck into some implicit indexing, by using constraints. What unusual words are contained in the book 'Dracula' by Bram Stoker? Here we get a table of all the common words in the English language (60,387 of them) and put them in a table. We put them in a Table Variable with the word as a primary key, a Table Variable heap, and a Table Variable with a primary key. We then take all the distinct words used in the book 'Dracula' (7,558 of them). We then create a table variable and insert into it all those uncommon words that are in 'Dracula', i.e. all the words in Dracula that aren't matched in the list of common words. To do this we use a left outer join, where the right-hand value is null. The results show a huge variation, between the sublime and the gorblimey:

    • If both tables contain a Primary Key on the columns we join on, and both are Table Variables, it took 33 Ms.
    • If one table contains a Primary Key, and the other is a heap, and both are Table Variables, it took 46 Ms.
    • If both Table Variables use a unique constraint, then the query takes 36 Ms.
    • If neither table contains a Primary Key and both are Table Variables, it took 116,383 Ms. Yes, nearly two minutes!!
    • If both tables contain a Primary Key, one is a Table Variable and the other is a temporary table, it took 113 Ms.
    • If one table contains a Primary Key, and both are Temporary Tables, it took 56 Ms.
    • If both tables are temporary tables and both have primary keys, it took 46 Ms.

    Here we see table variables which are joined on their primary key again enjoying a slight performance advantage over temporary tables. Where both tables are table variables and both are heaps, the query suddenly takes nearly two minutes! So what if you have two heaps and you use OPTION (RECOMPILE)? If you take the rogue query and add the hint, then suddenly the query drops its time down to 76 Ms. If you add unique indexes, then you've done even better, down to half that time. Here are the text execution plans. So where have we got to? Without drilling down into the minutiae of the execution plans, we can begin to create a hypothesis.
    If you are using table variables, and your tables are relatively small, they are faster than temporary tables, but as the number of rows increases you need to do one of two things: either have a primary key on the column you are using to join on, or else use OPTION (RECOMPILE). If you try to execute a query that is a join, and both tables are table variable heaps, you are asking for trouble (well, slow queries) unless you give the query the hint once the number of rows has risen past a point (30,000 in our first example, but this varies considerably according to context).

    Kevin's Skew
    In describing the table size, I used the term 'relatively small'. Kevin Boles produced an interesting case where a single-row table variable produces a very poor execution plan when joined to a very, very skewed table. In the original, pasted into my article as a comment, a table consisted of 100,000 rows in which the key column held a single number (1). To this were added eight rows with sequential numbers up to 9. When this was joined to a single-row Table Variable with a key of 2, it produced a bad plan. This problem is unlikely to occur in real usage, and the Query Optimiser team probably never set up a test for it. Actually, the skew can be slightly less extreme than Kevin made it. The following test showed that once the table had 54 sequential rows, it adopted exactly the same execution plan as for the temporary table, and then all was well. Undeniably, real data does occasionally cause problems to the performance of joins in Table Variables due to the extreme skew of the distribution. We've all experienced Perfectly Poisonous Table Variables in real live data. As in Kevin's example, indexes merely make matters worse, and the OPTION (RECOMPILE) trick does nothing to help. In this case, there is no option but to use a temporary table. However, one has to note that once the slight de-skew had taken place, the plans were identical across a huge range.

    Conclusions
    Where you need to hold intermediate results as part of a process, Table Variables offer a good alternative to temporary tables when used wisely. They can perform faster than a temporary table when the number of rows is not great. For some processing with huge tables, they can perform well when only a clustered index is required, and when the nature of the processing makes an index seek very effective. Table Variables are scoped to the batch or procedure and are unlikely to hang about in TempDB when they are no longer required. They require no explicit cleanup. Where the number of rows in the table is moderate, you can even use them in joins as 'heaps', unindexed. Beware, however, since, as the number of rows increases, joins on Table Variable heaps can easily become saddled with very poor execution plans, and this must be cured either by adding constraints (UNIQUE or PRIMARY KEY) or by adding the OPTION (RECOMPILE) hint if this is impossible. Occasionally, the way that the data is distributed prevents the efficient use of Table Variables, and this will require using a temporary table instead. Table Variables require some awareness by the developer about the potential hazards and how to avoid them. If you are not prepared to do any performance monitoring of your code or fine-tuning, and just want to pummel out stuff that 'just runs' without considering namby-pamby stuff such as indexes, then stick to Temporary tables.
If you are likely to slosh about large numbers of rows in temporary tables without considering the niceties of processing just what is required and no more, then temporary tables provide a safer and less fragile means-to-an-end for you.

    Read the article

  • First Look - Oracle Data Mining

    - by kimberly.billings
    In his blog, JT on EDM, James Taylor shares his analysis of Oracle Data Mining, including its new GUI and Exadata integration. While Oracle Data Mining has been available for a while, it is now easier to access and try via the Amazon Cloud. Using the Oracle 11gR2 Data Mining Amazon Machine Image (AMI), you can launch an Oracle Data Mining-enabled instance directly through Amazon Web Services (AWS) and connect to it using the Oracle Data Miner graphical user interface.

    The new Oracle Data Mining GUI, which will be available to beta customers soon, provides more graphics, the ability to define, save and share analytical "work flows" to solve business problems, and provides more automation and simplicity. Taylor comments that "the UI looks to have a nice look and feel including graphical model development flows, easy access to the data, nice little micro graphs when browsing data records and more."

    On using Oracle Data Mining with Exadata, Taylor writes, "Oracle says that the use of the ODM routines in the Exadata kernel is faster than running a native ODM model in the database by a factor of 2 and that this increases as more joins are used. This could mean that ODM outperforms even third party in-database analytics."

    Taylor concludes his blog with a positive overall review, stating that "ODM is a nice product for Oracle database customers and well worth looking into. The new UI will only make it more so." Read the blog.

    Read the article

  • Securing a license key with an RSA key

    - by Jesse Knott
    Hello, it's late, I'm tired, and probably being quite dense... I have written an application that I need to secure so it will only run on machines that I generate a key for. What I am doing for now is getting the BIOS serial number and generating a hash from that; I then encrypt it using an XML RSA private key, and then sign the XML to ensure that it is not tampered with. I am trying to package the public key to decrypt and verify the signature with, but every time I try to execute the code as a different user than the one that generated the signature, I get a failure on the signature. Most of my code is modified from sample code I have found, since I am not as familiar with RSA encryption as I would like to be. Below is the code I was using and the code I thought I needed to use to get this working right. Any feedback would be greatly appreciated, as I am quite lost at this point.

    The original code I was working with was this; this code works fine as long as the user launching the program is the same one that signed the document originally:

        CspParameters cspParams = new CspParameters();
        cspParams.KeyContainerName = "XML_DSIG_RSA_KEY";
        cspParams.Flags = CspProviderFlags.UseMachineKeyStore;

        // Create a new RSA signing key and save it in the container.
        RSACryptoServiceProvider rsaKey = new RSACryptoServiceProvider(cspParams)
        {
            PersistKeyInCsp = true,
        };

    This code is what I believe I should be doing, but it's failing to verify the signature no matter what I do, regardless of whether it's the same user or a different one:

        RSACryptoServiceProvider rsaKey = new RSACryptoServiceProvider();

        // Load the private key from an XML file
        XmlDocument xmlPrivateKey = new XmlDocument();
        xmlPrivateKey.Load("KeyPriv.xml");
        rsaKey.FromXmlString(xmlPrivateKey.InnerXml);

    I believe this to have something to do with the key container name (being a real dumbass here, please excuse me). I am quite certain that this is the line that is both causing it to work in the first case and preventing it from working in the second case:

        cspParams.KeyContainerName = "XML_DSIG_RSA_KEY";

    Is there a way for me to sign/encrypt the XML with a private key when the application license is generated, and then drop the public key in the app directory and use that to verify/decrypt the code? I can drop the encryption part if I can get the signature part working right. I was using it as a backup to obfuscate the origin of the license code I am keying from. Does any of this make sense? Am I a total dunce? Thanks for any help anyone can give me in this.
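
    A hedged sketch of the public-key verification path being asked about, using SignedXml from System.Security.Cryptography.Xml. Verifying needs only the public half of the key, so no key container (and no per-user key store) is involved; the file names are illustrative.

        using System.Security.Cryptography;
        using System.Security.Cryptography.Xml;
        using System.Xml;

        static bool VerifyLicense(string licensePath, string publicKeyPath)
        {
            // Load the signed license document.
            XmlDocument doc = new XmlDocument();
            doc.PreserveWhitespace = true;
            doc.Load(licensePath);

            // Load the public key (XML as exported via rsa.ToXmlString(false)).
            RSACryptoServiceProvider rsa = new RSACryptoServiceProvider();
            rsa.FromXmlString(System.IO.File.ReadAllText(publicKeyPath));

            // Find the Signature element and check it against the public key.
            XmlNodeList nodes = doc.GetElementsByTagName("Signature");
            if (nodes.Count == 0) return false;

            SignedXml signedXml = new SignedXml(doc);
            signedXml.LoadXml((XmlElement)nodes[0]);
            return signedXml.CheckSignature(rsa);
        }

    Passing the key explicitly to CheckSignature sidesteps the named key container entirely, which is consistent with the observation that the container name is what ties verification to the signing user.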

    Read the article

  • BDD for C# NUnit

    - by mjezzi
    I've been using a home-brewed BDD Spec extension for writing BDD-style tests in NUnit, and I wanted to see what everyone thought. Does it add value? Does it suck? If so, why? Is there something better out there? Here's the source: https://github.com/mjezzi/NSpec

    There are two reasons I created this:

    1. To make my tests easy to read.
    2. To produce a plain English output to review specs.

    Here's an example of how a test will look - since zombies seem to be popular these days. Given a Zombie, Person, and IWeapon:

        namespace Project.Tests.PersonVsZombie
        {
            public class Zombie { }

            public interface IWeapon
            {
                void UseAgainst( Zombie zombie );
            }

            public class Person
            {
                private IWeapon _weapon;

                public bool IsStillAlive { get; set; }

                public Person( IWeapon weapon )
                {
                    IsStillAlive = true;
                    _weapon = weapon;
                }

                public void Attack( Zombie zombie )
                {
                    if( _weapon != null )
                        _weapon.UseAgainst( zombie );
                    else
                        IsStillAlive = false;
                }
            }
        }

    And the NSpec-styled tests:

        public class PersonAttacksZombieTests
        {
            [Test]
            public void When_a_person_with_a_weapon_attacks_a_zombie()
            {
                var zombie = new Zombie();
                var weaponMock = new Mock<IWeapon>();
                var person = new Person( weaponMock.Object );

                person.Attack( zombie );

                "It should use the weapon against the zombie".ProveBy( spec =>
                    weaponMock.Verify( x => x.UseAgainst( zombie ), spec ) );

                "It should keep the person alive".ProveBy( spec =>
                    Assert.That( person.IsStillAlive, Is.True, spec ) );
            }

            [Test]
            public void When_a_person_without_a_weapon_attacks_a_zombie()
            {
                var zombie = new Zombie();
                var person = new Person( null );

                person.Attack( zombie );

                "It should cause the person to die".ProveBy( spec =>
                    Assert.That( person.IsStillAlive, Is.False, spec ) );
            }
        }

    You'll get the spec output in the output window:

        [PersonVsZombie] - PersonAttacksZombieTests
            When a person with a weapon attacks a zombie
                It should use the weapon against the zombie
                It should keep the person alive
            When a person without a weapon attacks a zombie
                It should cause the person to die

        2 passed, 0 failed, 0 skipped, took 0.39 seconds (NUnit 2.5.5).

    Read the article

  • Windows 8 - Custom WinRT components and WinJS

    - by Jonas Bush
    Wow, I'm still alive! I installed the RTM of Windows 8 when it became available, and in the last few days have started taking a look at writing a Windows 8 app using HTML/JS, which in and of itself is a weird thing. I don't think that Windows developers of 10 years ago would've thought something like this would have ever come about. As I was working on this, I ran across a problem, found the solution, and thought I'd blog about it to try and kick-start me back into blogging. I already answered my own question on Stack Overflow, but will explain here.

    I needed to create a custom WinRT component to do some stuff that I either wouldn't be able to or didn't know how to do with the javascript libraries available to me. I had a javascript class defined like this:

        WinJS.Namespace.define("MyApp", {
            MyClass: WinJS.Class.define(function() {
                //constructor function
            },
            { /*instance members*/ },
            { /*static members*/ })
        });

    This gives me an object I can access in javascript:

        var foo = new MyApp.MyClass();

    I created my WinRT component like this:

        namespace MyApp
        {
            public sealed class SomeClass
            {
                public int SomeMethod()
                {
                    return 42;
                }
            }
        }

    With the thought that from my javascript, I'd be able to do this:

        var foo = new MyApp.MyClass();
        var bar = new MyApp.SomeClass(); //from WinRT component
        foo.SomeProperty = bar.SomeMethod();

    When I tried this, I got the following error when trying to construct MyApp.MyClass (the object defined in Javascript):

        0x800a01bd - Javascript runtime error: Object doesn't support this action.

    I puzzled for a bit, then noticed while debugging that my "MyApp" namespace didn't have anything in it other than the WinRT component. I changed my WinRT component to this:

        namespace MyAppUtils
        {
            public sealed class SomeClass
            {
                //etc
            }
        }

    And after this, everything was fine. So, lesson learned: if you're using Javascript and create a custom WinRT component, make sure that the WinRT component is in a namespace all its own. Not sure why this happens; if I find out why, or if MS says something about this somewhere, I'll come back and update this.

    Read the article

  • Learning Objective-C for iPad/iPhone/iPod Development

    - by Jeff Julian
    I am learning how to write apps for the iPad/iPhone/iPod! Why? Well, several reasons. One reason: I have 5 devices in my house on the platform. I had an iPad and iPhone, Michelle has an iPhone, and each of the kids has an iPod Touch. They are excellent devices for life management, entertainment, and learning. I am amazed at how well the kids pick up on it and how much it affects the way they learn. My two-year-old knows how to use it better than any other device we own, and she is learning new words and letters so quickly.

    Because of this saturation at home, it would be fun to write some apps my family could use - some games to bring the hobby of development back into my life. The second reason is we want to have a Geekswithblogs app for the iPhone and iPad. We are not sure if it is purely informational (blog posts and tweets) or if members want to be able to publish from the app. Creating a blog editor would be tough stuff, but could be just the right challenge.

    There are so many more reasons, but the last one that really makes me excited is that it is a new domain of development where I get excited when I think about writing apps. That excitement level where I want to see if there are User Groups, and where, if we are just watching TV, I break out the MBP and start working on it. That excitement level where I could really read a development book cover to cover and not just use it as a reference. I really do like this feeling.

    Who knows how long this will last, and I am definitely not leaving .NET. Microsoft software will always be my main focus, but for the time being, my hobby is changing and I am getting excited about development again.

    Technorati Tags: Apple, iPad Development, Objective-C, New Frontiers
    Image: Courtesy of Apple

    Read the article

  • Why does it freeze when sending email?

    - by Outlaw Lemur
    So I have a Kinect program which, when it detects a human, saves images of them and sends your email address a notification email. The thing is that when it sends the email, it freezes and stops running. Why does it do this?

    Email notification code:

        void SendNotificationEmail()
        {
            string email = textBox1.Text;
            string message = "Someone has been detected in your house!\n Go to www.kinected.webs.com to view your photos now!!!!";

            System.Net.Mail.MailMessage emailsend = new System.Net.Mail.MailMessage();
            emailsend.To.Add(email);
            emailsend.Subject = "There is an Intruder In Your Home!";
            emailsend.From = new System.Net.Mail.MailAddress("[email protected]");
            emailsend.Body = message;

            System.Net.Mail.SmtpClient smtp = new System.Net.Mail.SmtpClient("smtp.mail.yahoo.com.");
            smtp.Send(emailsend);
        }

    Where it's supposed to fire:

        void nui_ColorFrameReady2(object sender, ImageFrameReadyEventArgs e)
        {
            // 32-bit per pixel, RGBA image
            PlanarImage Image = e.ImageFrame.Image;
            //int deltaFrames = totalFrames - lastFrameWithMotion;
            //if (totalFrames2 <= stopFrameNumber & deltaFrames > 300)
            {
                ++totalFrames2;
                string bb1 = Convert.ToString(totalFrames2);
                // string file_name_3 = "C:\\Research\\Kinect\\Proposal\\Depth_Img" + bb1 + ".jpg";
                string file_name_4 = "C:\\temp\\Kinect1_Img" + bb1 + ".jpg";

                video.Source = BitmapSource.Create(Image.Width, Image.Height, 96, 96,
                    PixelFormats.Bgr32, null, Image.Bits, Image.Width * Image.BytesPerPixel);
                BitmapSource image4 = BitmapSource.Create(Image.Width, Image.Height, 96, 96,
                    PixelFormats.Bgr32, null, Image.Bits, Image.Width * Image.BytesPerPixel);

                if (PersonDetected == 1)
                {
                    if (totalFrames2 % 10 == 0)
                    {
                        image4.Save(file_name_4, Coding4Fun.Kinect.Wpf.ImageFormat.Jpeg);
                        SendNotificationEmail();
                        PersonDetected = 0;
                        // lastFrameWithMotion = totalFrames;
                        // topFrameNumber += 100;
                    }
                }
            }
        }

    Thanks for any help!
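
    The freeze is consistent with SmtpClient.Send being a blocking call: it runs on the UI/event thread inside the Kinect frame handler and stalls everything until the SMTP exchange finishes or times out. A hedged sketch of a non-blocking variant using SendAsync (same System.Net.Mail types as above; the Yahoo SMTP host is kept from the question and may still need port/SSL/credential settings):

        void SendNotificationEmailAsync()
        {
            string email = textBox1.Text;
            string message = "Someone has been detected in your house!";

            System.Net.Mail.MailMessage emailsend = new System.Net.Mail.MailMessage();
            emailsend.To.Add(email);
            emailsend.Subject = "There is an Intruder In Your Home!";
            emailsend.From = new System.Net.Mail.MailAddress("[email protected]");
            emailsend.Body = message;

            System.Net.Mail.SmtpClient smtp = new System.Net.Mail.SmtpClient("smtp.mail.yahoo.com.");
            // Dispose the message once the send finishes instead of blocking here.
            smtp.SendCompleted += (s, args) => emailsend.Dispose();
            smtp.SendAsync(emailsend, null); // returns immediately; frames keep pumping
        }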

    Read the article

  • Offline backup synchronization

    - by Pavan Kumar
    There is a central server running Windows Server 2003 and SQL Server 2005, and there are 7 client machines situated in various places with XP Pro & SQL Server 2005 installed on all of them. They are not interconnected, so they are physically separate. One person goes to each of these centers maybe twice a month and takes the backup (a full database consisting of mdf and ldf files) with a pen drive and brings it to the central server, which contains the central database holding the same schema as all the other client databases. I need to synchronize each backup database (belonging to different centers) one by one, updating existing data or inserting new data in the central database.

    The solution I came up with was replication. The pen drive is brought to the central server with the 7 instances of the databases, and then the databases are attached one by one to the same SQL Server where the central database exists. My idea was then to replicate the backup databases one by one, i.e. using a single subscription (the central database) and multiple publications (the 7 database instances in my case), performing replication locally (in the same machine). So I tried to develop a UI in C# .NET to programmatically run transactional replication with a push subscription using RMO programming (which is incomplete as of now, because there is no point in developing it when you already know it is not the solution).

    Transactional replication can be set to initialize either with a snapshot or without one. If I go for the first option, i.e. with a snapshot, the data present in the central database is overwritten by the new data, so the data initially present in the central database is lost. If I try to initialize without a snapshot, no data will be sent from the backup database to the server (the backup already contains the updated and new data). Replication only works in a scenario where incremental changes are made after you set up the replication: the data present in the backup database when setting up the replication will not be replicated when running the snapshot agent for the first time to synchronize; only changes in the backup database thereafter would be reflected to the central database. (Remember, I am not going to insert new data or make any changes to the backup database after I attach it to the central server.) So this solution is not feasible.

    I want a solution for synchronizing from one client database to the central database present in the same machine using C#.NET. If you can provide me a small example, maybe with two databases (with the same schema) DB1 (client) to DB2 (server) consisting of one or two tables, it will be very helpful. The synchronization is not bidirectional. I want to only update existing data or insert new data from DB1 to DB2 (DB2 may contain some data initially).

    Thanks and Regards
    Pavan
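
    Since a small example is asked for: a hedged sketch of the one-way "update existing, insert new" pass using plain ADO.NET, with both databases attached to the same SQL Server 2005 instance. The database, table and column names (DB1/DB2, dbo.Customers, Id, Name) are illustrative assumptions; MERGE is avoided because it needs SQL Server 2008+, hence the two-statement form.

        using System;
        using System.Data.SqlClient;

        class OneWaySync
        {
            static void Main()
            {
                // One connection is enough: three-part names reach both
                // databases on the same instance.
                string connStr = "Server=(local);Database=DB2;Integrated Security=true";
                string syncSql = @"
                    UPDATE t SET t.Name = s.Name
                    FROM DB2.dbo.Customers t
                    JOIN DB1.dbo.Customers s ON s.Id = t.Id;

                    INSERT INTO DB2.dbo.Customers (Id, Name)
                    SELECT s.Id, s.Name
                    FROM DB1.dbo.Customers s
                    WHERE NOT EXISTS (SELECT 1 FROM DB2.dbo.Customers t WHERE t.Id = s.Id);";

                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand(syncSql, conn))
                {
                    conn.Open();
                    int rows = cmd.ExecuteNonQuery(); // updated + inserted row count
                    Console.WriteLine("{0} rows synchronized.", rows);
                }
            }
        }

    For many tables, the same pattern would be repeated per table (or generated from the schema); wrapping the batch in a transaction would keep a partial run from leaving the central database half-synchronized.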

    Read the article

  • Facilitating XNA game deployments for non programmers

    - by Sal
    I'm currently working on an RPG, using the RPG starter kit from XNA as a base. (http://xbox.create.msdn.com/en-US/education/catalog/sample/roleplaying_game) I'm working with a small team (two designers and one music/sound artist), but I'm the only programmer. Currently, we're working with the following (unsustainable) system: the team creates new pics/sounds to add to the game, or they modify existing sounds/pics, then they commit their work to a repository, where we keep a current build of everything. (Code, images, sound, etc.) Every day or so, I create a new installer, reflecting the new images, code changes, and sound, and everyone installs it. My issue is this: I want to create a system where the rest of the team can replace the combat sounds, for instance, and they can immediately see the changes, without having to wait for me to build. The way XNA's setup, if I publish, it encodes all of the image and sound files, so the team can't "hot swap." I can set up Microsoft VS on everyone's machine and show them how to quickly publish, but I wanted to know if there was a simpler way of doing this. Has anyone come up against this when working with teams using XNA?
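
    One common workaround, offered here as a hedged sketch rather than the only option, is to bypass the content pipeline for swappable assets and load raw files at runtime, so designers can overwrite a PNG and simply restart the game with no publish step. XNA 4.0's Texture2D.FromStream supports this; the folder layout below is an assumption.

        using System.IO;
        using Microsoft.Xna.Framework.Graphics;

        public class HotSwapContent
        {
            private readonly GraphicsDevice device;
            private readonly string rootFolder; // e.g. a "RawContent" folder next to the .exe

            public HotSwapContent(GraphicsDevice device, string rootFolder)
            {
                this.device = device;
                this.rootFolder = rootFolder;
            }

            // Loads an unprocessed PNG straight from disk, skipping the content
            // pipeline, so artists can replace files without a rebuild.
            public Texture2D LoadTexture(string name)
            {
                string path = Path.Combine(rootFolder, name + ".png");
                using (FileStream stream = File.OpenRead(path))
                {
                    return Texture2D.FromStream(device, stream);
                }
            }
        }

    SoundEffect.FromStream offers a similar escape hatch for .wav files. One caveat: FromStream textures are not premultiplied-alpha like pipeline-processed ones, so translucent sprites may need a different blend state or a premultiply pass after loading.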

    Read the article

  • Facing Memory Leaks in AES Encryption Method.

    - by Mubashar Ahmad
    Can anyone please identify whether there are any possible memory leaks in the following code? I have tried it with .NET Memory Profiler and it says "CreateEncryptor" and some other functions are leaving unmanaged memory leaks, which I have confirmed using performance monitors. However, dispose, clear, and close calls are already placed wherever possible. Please advise me accordingly; it's quite urgent.

        public static string Encrypt(string plainText, string key)
        {
            // Set up the encryption objects
            byte[] encryptedBytes = null;
            using (AesCryptoServiceProvider acsp = GetProvider(Encoding.UTF8.GetBytes(key)))
            {
                byte[] sourceBytes = Encoding.UTF8.GetBytes(plainText);
                using (ICryptoTransform ictE = acsp.CreateEncryptor())
                {
                    // Set up stream to contain the encryption
                    using (MemoryStream msS = new MemoryStream())
                    {
                        // Perform the encryption, storing output into the stream
                        using (CryptoStream csS = new CryptoStream(msS, ictE, CryptoStreamMode.Write))
                        {
                            csS.Write(sourceBytes, 0, sourceBytes.Length);
                            csS.FlushFinalBlock();
                            // sourceBytes are now encrypted as an array of secure bytes
                            encryptedBytes = msS.ToArray(); // .ToArray() is important, don't mess with the buffer
                            csS.Close();
                        }
                        msS.Close();
                    }
                }
                acsp.Clear();
            }
            // Return the encrypted bytes as a BASE64 encoded string
            return Convert.ToBase64String(encryptedBytes);
        }

        private static AesCryptoServiceProvider GetProvider(byte[] key)
        {
            AesCryptoServiceProvider result = new AesCryptoServiceProvider();
            result.BlockSize = 128;
            result.KeySize = 256;
            result.Mode = CipherMode.CBC;
            result.Padding = PaddingMode.PKCS7;
            result.GenerateIV();
            result.IV = new byte[] { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };

            byte[] RealKey = GetKey(key, result);
            result.Key = RealKey;
            // result.IV = RealKey;
            return result;
        }

        private static byte[] GetKey(byte[] suggestedKey, SymmetricAlgorithm p)
        {
            byte[] kRaw = suggestedKey;
            List<byte> kList = new List<byte>();

            for (int i = 0; i < p.LegalKeySizes[0].MaxSize; i += 8)
            {
                kList.Add(kRaw[(i / 8) % kRaw.Length]);
            }

            byte[] k = kList.ToArray();
            return k;
        }
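
    One hedged diagnostic worth running before chasing a leak: the unmanaged CAPI handles behind CreateEncryptor are released by finalizers, so memory can look like it is leaking until a full collection runs. A small harness (loop count and sample inputs are arbitrary) to distinguish a true leak from delayed finalization:

        // Hammer the Encrypt method, then force finalizers to run. If the working
        // set returns to baseline after this, the "leak" was pending finalization;
        // if it keeps growing run over run, something is genuinely not released.
        for (int i = 0; i < 10000; i++)
        {
            Encrypt("some sample plaintext", "0123456789abcdef");
        }
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
        Console.WriteLine("Working set: {0:N0} bytes", Environment.WorkingSet);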

    Read the article

  • How can I turn a string of text into a BigInteger representation for use in an El Gamal cryptosystem

    - by angstrom91
    I'm playing with the El Gamal cryptosystem, and my goal is to be able to encipher and decipher long sequences of text. I have come up with a method that works for short sequences but does not work for long sequences, and I cannot figure out why.

    El Gamal requires the plaintext to be an integer. I have turned my string into a byte[] using the .getBytes() method for Strings, and then created a BigInteger out of the byte[]. After encryption/decryption, I turn the BigInteger into a byte[] using the .toByteArray() method for BigIntegers, and then create a new String object from the byte[]. This works perfectly when I call ElGamalEncipher with strings of up to 129 characters. With 130 or more characters, the output produced is garbled. Can someone suggest how to solve this issue? Is this an issue with my method of turning the string into a BigInteger? If so, is there a better way to turn my string of text into a BigInteger and back? Below is my encipher/decipher code.

        public static BigInteger[] ElGamalEncipher(String plaintext, BigInteger p, BigInteger g, BigInteger r) {
            // returns a BigInteger[] cipherText
            // cipherText[0] is c
            // cipherText[1] is d
            BigInteger[] cipherText = new BigInteger[2];
            BigInteger pText = new BigInteger(plaintext.getBytes());

            // 1: select a random integer k such that 1 <= k <= p-2
            BigInteger k = new BigInteger(p.bitLength() - 2, sr);

            // 2: compute c = g^k (mod p)
            BigInteger c = g.modPow(k, p);

            // 3: compute d = P*r^k = P(g^a)^k (mod p)
            BigInteger d = pText.multiply(r.modPow(k, p)).mod(p);

            // C = (c,d) is the ciphertext
            cipherText[0] = c;
            cipherText[1] = d;
            return cipherText;
        }

        public static String ElGamalDecipher(BigInteger c, BigInteger d, BigInteger a, BigInteger p) {
            // returns the plaintext enciphered as (c,d)
            // 1: use the private key a to compute the least non-negative residue
            // of an inverse of (c^a) (mod p)
            BigInteger z = c.modPow(a, p).modInverse(p);
            BigInteger P = z.multiply(d).mod(p);
            byte[] plainTextArray = P.toByteArray();
            String output = null;
            try {
                output = new String(plainTextArray, "UTF8");
            } catch (Exception e) {
            }
            return output;
        }

    Read the article

  • Does the direction of a Huffman tree's child node matter?

    - by Omega
    So, I'm on my quest to create a Java implementation of Huffman's algorithm for compressing/decompressing files (as you might know, ever since "Why create a Huffman tree per character instead of a Node?") for a school assignment. I now have a better understanding of how this thing is supposed to work. Wikipedia has a great-looking algorithm here that seemed to make my life way easier. Taken from http://en.wikipedia.org/wiki/Huffman_coding:

    1. Create a leaf node for each symbol and add it to the priority queue.
    2. While there is more than one node in the queue:
       • Remove the two nodes of highest priority (lowest probability) from the queue.
       • Create a new internal node with these two nodes as children and with probability equal to the sum of the two nodes' probabilities.
       • Add the new node to the queue.
    3. The remaining node is the root node and the tree is complete.

    It looks simple and great. However, it left me wondering: when I "merge" two nodes (make them children of a new internal node), does it even matter which direction (left or right) each node ends up in? I still don't fully understand Huffman coding, and I'm not very sure if there is a criterion used to tell whether a node should go to the right or to the left. I assumed that, perhaps, the highest-frequency node would go to the right, but I've seen some Huffman trees on the web that don't seem to follow such a criterion.

    For instance, Wikipedia's example image http://upload.wikimedia.org/wikipedia/commons/thumb/8/82/Huffman_tree_2.svg/625px-Huffman_tree_2.svg.png seems to put the highest ones to the right. But other images, like http://thalia.spec.gmu.edu/~pparis/classes/notes_101/img25.gif, have them all to the left. However, they're never mixed up in the same image (some to the right and others to the left). So, does it matter? Why?
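
    A minimal sketch of the merge loop from the quoted algorithm (written in C# for illustration; the shape ports straight to Java). The left/right choice at each merge is arbitrary: swapping children only flips 0/1 bits in the resulting codes, never their lengths, so compression is identical either way. Encoder and decoder just have to agree on a convention.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Node
        {
            public char? Symbol;      // null for internal nodes
            public int Frequency;
            public Node Left, Right;  // which child goes where is a free choice
        }

        class Huffman
        {
            static Node Build(Dictionary<char, int> freqs)
            {
                // Simple list-as-priority-queue; fine for small alphabets.
                var queue = freqs.Select(kv => new Node { Symbol = kv.Key, Frequency = kv.Value }).ToList();
                while (queue.Count > 1)
                {
                    var ordered = queue.OrderBy(n => n.Frequency).ToList();
                    Node a = ordered[0], b = ordered[1];
                    queue.Remove(a);
                    queue.Remove(b);
                    // Putting a on the Left and b on the Right is a convention,
                    // not a requirement; reversing it yields mirrored, equally
                    // optimal codes.
                    queue.Add(new Node { Frequency = a.Frequency + b.Frequency, Left = a, Right = b });
                }
                return queue[0]; // root of the completed tree
            }
        }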

    Read the article

  • Ruby or Python?

    - by Bobby Tables
    Hi all, This question is extremely subjective and open-ended. It might even sound like something I should just research for myself and make my own decision. But I'd like to put it out there and get some thoughts from others. Long story short - I burned out with the rat race and am on a self-funded sabbatical this year. Much of it is to take a break from the corporate grind and travel around, but I also want to play around with new technologies and do some self-learning projects, to stay up to speed on programming, and well - I just love tinkering with programming, when there's no pressure! Here's the thing: I am a lifetime C/C++/Java programmer. I'm a bit of a squiggly bracket snob since I've been working with this family of languages for my entire programming career. So I'd like to learn a language which isn't so closely syntactically related to this group. What I'm basically looking for is a language which is relatively general purpose, fun to learn, has some new concepts that are different from C++/Java, and has a good community. A secondary consideration is that it has good web development frameworks. A tertiary consideration is that it's not totally academic (read: there are real world jobs out there using it). I've narrowed it down to Ruby or Python. My impression of Ruby is that it is extremely web oriented - that the only real application of it is as a server side scripting language for doing web stuff (mainly Ruby on Rails). For Python I'm not so sure. TL;DR and to put it as succinctly as possible: which of these would be better for a C++/Java guy to learn to get some new perspectives on programming? And which is more open and general purpose and applicable to a wider set of applications? I'm leaning towards Ruby at the moment, but I worry to an extent that it looks like it's used as nothing but a server side web language.

    Read the article
