Search Results

Search found 64995 results on 2600 pages for 'data import'.


  • Blender to 3ds max to cal3d format

    - by Kaliber64
    There are quite a few questions on cal3d, but they are old and no longer apply. In Blender (it must be 2.49a for the Python export script to work), I have a scene with 7 meshes, 1 armature, and 10 bones. I tried reducing it to one mesh to simplify things, but that doesn't change anything. I found a small .blend file that was used for cal3d and it exported just fine, so I tried to copy its setup, with no success. EDIT 8/13/2012: Here is what I have found over the last week. I made the mesh in the newest Blender (2.62?) and exported it so I could import it into the old one (2.49a). I did the animation in the old one, because importing new .blend files into old Blenders only warns that keyframe data will be lost, and everything else was fine. Then you hit the last problem: the new files won't export their meshes. BUT I found that meshes made in the old Blender export regardless; I can't find one that won't. So if I remade my model in the old Blender I could get it to export. At that point I found a modified release of cal3d which fixes the morph objects and adds what cal3d left unfinished (I went looking because the core model variable would not initialize; I had made a really small test subject in the old Blender rather than remaking my big model, which had taken 4 hours). Under their license they have to release the modification, but it has no documentation, so I have to figure it out on my own; it is mostly the same. This library also came with a 3ds Max exporter. My question now is: how do I transfer armature and mesh information from Blender to 3ds Max in order to export to the cal3d format? Every time I try, the models are see-through and tiny, and there are no bones. The formats I have tried to import are .3ds, .obj (mesh only), and COLLADA. In all of them the mesh is invisible and there are no bones. The default texture is on, so I should be able to see it, and all the vertices are present (I found a vertex highlighter, so I can see those). If any of this is confusing, let me know and I will clear it up.

    Read the article

  • SharePoint 2010 Hosting :: How to Create an External Content Type SharePoint 2010

    - by mbridge
    This short article shows how easy it is, with SharePoint Designer 2010, to create an External Content Type over an external database and integrate it with our SharePoint portals. You can download SharePoint Designer 2010 here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d88a1505-849b-4587-b854-a7054ee28d66&displaylang=en. For this example I will create a database in SQL Server and use SharePoint Designer 2010 to create the connections, mirroring the database in our SharePoint portal through a list. The first thing we need to do is connect to SQL Server, create a database called "Contacts", and add a table "Contact" with the required fields. When we create the External Content Type we need to associate it with a content type; in this case I am using the Generic List. Then we can create the connection to the external data source. After creating the connection to the database, we can define which columns we will use and which operations we will add to our custom list. For this example I selected all the default operations. These operations are important because the business rules are defined in each one. After creating the different operations we can create the custom list, define how each operation will behave, and give the custom list a name. If you try to access the new custom list, called "Custom Contact", you will see that we do not yet have access to Business Data Connectivity. To resolve this, we need to grant users access and permissions to the custom External Content Type's BDC connection in Central Administration. Go to the Central Administration page and select "Service Applications > Manage Service Applications", then select the "Business Data Connectivity Service" and choose "Manage". This lists all External Content Types; choose the one we created and select "Set Object Permissions", which lets you add users to the BDC and manage the permissions for the custom list. Once the correct permissions are granted, we can access the data in our custom Contact list and start creating new items, along with all the other operations we defined for the list. I hope you like this little article about connecting database content to a SharePoint portal using External Content Types and BCS. Thank you.

    Read the article

  • Alexa Traffic Rankings Continuously Inaccurate - Room For Improvement

    There is growing discussion on the internet that the Alexa Traffic Rankings are not accurate and cannot be relied upon to gauge traffic for the average site owner. The main flaw I have found with Alexa is its method of collecting user traffic data, which comes mainly from the Alexa Toolbar gathering data from the browsers of users who have it installed. This is my experiment and the results.

    Read the article

  • What is the proper name for this design pattern in Python?

    - by James
    In Python, is the proper name for the PersonXXX class below PersonProxy, PersonInterface, etc.?

        import rest

        class PersonXXX(object):
            def __init__(self, db_url):
                self.resource = rest.Resource(db_url)

            def create(self, person):
                self.resource.post(person.data())

            def get(self):
                pass

            def update(self):
                pass

            def delete(self):
                pass

        class Person(object):
            def __init__(self, name, age):
                self.name = name
                self.age = age

            def data(self):
                return dict(name=self.name, age=self.age)
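    For reference, a minimal usage sketch of the two classes above (the URL and the exact behaviour of rest.Resource are assumptions, not part of the original question):

        # Hypothetical usage of the snippet above; the URL is a placeholder.
        gateway = PersonXXX("http://localhost:5984/people")
        person = Person(name="Ada", age=36)
        gateway.create(person)   # posts {'name': 'Ada', 'age': 36} to the REST resource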

    Read the article

  • How do I get localized names of application in python?

    - by Mystic-Mirage
    This code gives me only the English application name when the .desktop file has no "Name[*]" entries (like totem.desktop, which only has "X-Ubuntu-Gettext-Domain: totem"):

        from gi.repository import Gio

        app = Gio.app_info_get_default_for_type('video/x-flv', True)
        print app.get_name()

    The same code gives the proper result for vlc.desktop, and the Ubuntu Dash shows properly localized names for all applications. How do I get the localized names of applications in Python? Sorry for my English.
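    One possible approach, offered only as a sketch and not as a confirmed answer from the original thread, is to translate the English name through the gettext domain that the .desktop file declares in X-Ubuntu-Gettext-Domain (DesktopAppInfo.get_string needs a reasonably recent GLib; older versions would have to read the key from the .desktop file by hand):

        import gettext
        from gi.repository import Gio

        app = Gio.app_info_get_default_for_type('video/x-flv', True)
        name = app.get_name()

        # Assumption: the app's .desktop file carries an X-Ubuntu-Gettext-Domain key
        # (e.g. "totem"); translate the untranslated name through that domain.
        desktop = Gio.DesktopAppInfo.new(app.get_id())
        domain = desktop.get_string('X-Ubuntu-Gettext-Domain')
        if domain:
            name = gettext.dgettext(domain, name)

        print name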

    Read the article

  • The SSIS File Sweeper

    Moving files around is a task that many DBAs need to accomplish. Whether as part of an import or export process, or just for administration purposes, this new article from JD Gonzalez can help you solve this problem.

    Read the article

  • Restoring databases to a set drive and directory

    - by okeofs
    Introduction

    Often people say that necessity is the mother of invention. In this case I was faced with the dilemma of having to restore several databases, with multiple 'ndf' files, and having to restore them with different physical file names, drives and directories, on servers other than the servers from which they originated. As most of us would do, I went to Google to see if I could find some code to achieve this task, and found some interesting snippets on Pinal Dave's website. Naturally, I had to take it further than the code snippet; HOWEVER it was a great place to start.

    Creating a temp table to hold database file details

    First off, I created a temp table which holds the details of the individual data files within the database. Although there are a plethora of fields (within the temp table below), I utilize LogicalName only within this example. The temporary table structure may be seen below:

        create table #tmp
        (
            LogicalName nvarchar(128),
            PhysicalName nvarchar(260),
            Type char(1),
            FileGroupName nvarchar(128),
            Size numeric(20,0),
            MaxSize numeric(20,0),
            Fileid tinyint,
            CreateLSN numeric(25,0),
            DropLSN numeric(25,0),
            UniqueID uniqueidentifier,
            ReadOnlyLSN numeric(25,0),
            ReadWriteLSN numeric(25,0),
            BackupSizeInBytes bigint,
            SourceBlockSize int,
            FileGroupId int,
            LogGroupGUID uniqueidentifier,
            DifferentialBaseLSN numeric(25,0),
            DifferentialBaseGUID uniqueidentifier,
            IsReadOnly bit,
            IsPresent bit,
            TDEThumbPrint varchar(50)
        )

    We now declare and populate a variable (@path), setting it to the path of our SOURCE database backup:

        declare @path varchar(50)
        set @path = 'P:\DATA\MYDATABASE.bak'

    From this point, we insert the file details of our database into the temp table. Note that we do so by utilizing a restore statement, HOWEVER doing so in 'filelistonly' mode:

        insert #tmp
        EXEC ('restore filelistonly from disk = ''' + @path + '''')

    At this point, I depart from what I gleaned from Pinal Dave. I now instantiate a few more local variables. The use of each variable will be evident within the cursor (which follows):

        Declare @RestoreString as Varchar(max)
        Declare @NRestoreString as NVarchar(max)
        Declare @LogicalName as varchar(75)
        Declare @counter as int
        Declare @rows as int
        set @counter = 1
        select @rows = COUNT(*) from #tmp  -- Count the number of records in the temp table

    Declaring and populating the cursor

    At this point I do realize that many people are cringing about the use of a cursor. Being an Oracle professional as well, I have learnt that there is a time and place for cursors. I would remind the reader that the data that will be read into the cursor comes from a local temp table, and as such, any locking of the records (within the temp table) is not really an issue.

        DECLARE MY_CURSOR Cursor
        FOR
        Select LogicalName
        From #tmp

    Parsing the logical names from within the cursor

    A small caveat that works in our favour is that the first logical name (of our database) is the logical name of the primary data file (.mdf). The other logical names, except for the very last one, belong to secondary data files. The last logical name is that of our database log file.

    I now open my cursor and populate the variable @RestoreString:

        Open My_Cursor
        set @RestoreString = 'RESTORE DATABASE [MYDATABASE] FROM DISK = N''P:\DATA\MYDATABASE.bak''' + ' with '

    We now fetch the first record from the temp table:

        Fetch NEXT FROM MY_Cursor INTO @LogicalName

    While there are still records left within the cursor, we dynamically build our restore string. Note that we are using concatenation to create 'one big restore executable string'. Note also that the target physical file name is hardwired, as is the target directory.

        While (@@FETCH_STATUS <> -1)
        BEGIN
        IF (@@FETCH_STATUS <> -2) -- As long as there are no rows missing
        select @RestoreString = case
            when @counter = 1 then -- This is the mdf file
                @RestoreString + 'move N''' + @LogicalName + '''' + ' TO N''X:\DATA1\' + @LogicalName + '.mdf' + '''' + ', '
            -- OK, if it passes through here we are dealing with an .ndf file.
            -- Note that Counter must be greater than 1 and less than the number of rows.
            when @counter > 1 and @counter < @rows then -- These are the ndf file(s)
                @RestoreString + 'move N''' + @LogicalName + '''' + ' TO N''X:\DATA1\' + @LogicalName + '.ndf' + '''' + ', '
            -- OK, if it passes through here we are dealing with the log file
            when @LogicalName like '%log%' then
                @RestoreString + 'move N''' + @LogicalName + '''' + ' TO N''X:\DATA1\' + @LogicalName + '.ldf' + ''''
        end
        -- Increment the counter
        set @counter = @counter + 1
        FETCH NEXT FROM MY_CURSOR INTO @LogicalName
        END

    At this point we have populated the varchar(max) variable @RestoreString with a concatenation of all the necessary file names. What we now need to do is run the sp_executesql stored procedure to effect the restore. First, we must place our 'concatenated string' into an nvarchar-based variable. Obviously this will only work as long as the length of @RestoreString is less than varchar(max) / 2.

        set @NRestoreString = @RestoreString
        EXEC sp_executesql @NRestoreString

    Upon completion of this step, the database should be restored to the server. I now close and deallocate the cursor, and to be clean, I would also drop my temp table.

        CLOSE MY_CURSOR
        DEALLOCATE MY_CURSOR
        GO

    Conclusion

    Restoring databases on different servers, with different physical names and on different drives, is a fact of life. Through the use of a few variables and a simple cursor, we have an efficient and effective way to achieve this task.

    Read the article

  • How to edit pdf metadata from command line?

    - by bdr529
    I need a command-line tool for editing the metadata of PDF files. I'm using an Aiptek MyNote Premium tablet for writing my notes and minutes; I import them later and convert them to PDF automatically with a simple script using Inkscape and Ghostscript. Is there any command-line tool to add some categories to the PDF's metadata, so I can find the PDF later (e.g. with GNOME Do) by category?
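    If a short Python script is acceptable alongside a pure command-line tool, one possible sketch uses the pypdf library (assuming it is installed; the file names and keyword values below are placeholders):

        # Sketch: copy a PDF and attach keyword metadata with pypdf (assumed available).
        from pypdf import PdfReader, PdfWriter

        reader = PdfReader("minutes.pdf")            # placeholder input file
        writer = PdfWriter()
        for page in reader.pages:
            writer.add_page(page)

        # Keep any existing metadata, then add the categories as keywords.
        if reader.metadata:
            writer.add_metadata(reader.metadata)
        writer.add_metadata({"/Keywords": "minutes, notes, 2012"})

        with open("minutes-tagged.pdf", "wb") as out:
            writer.write(out)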

    Read the article

  • XNA Shader Texture Memory

    - by Alex
    I was wondering about texture optimization in XNA 4.0. Does the ContentManager send the texture data to the GPU directly when the texture gets loaded, or do I send the texture data to the GPU when I declare a texture in my shader? If it's the latter, what happens if I have 5 shaders all using the same texture: does that mean I send 5 copies of that texture data to the GPU, or am I simply telling the GPU which preloaded texture to use? Or does XNA do the heavy lifting in the background?

    Read the article

  • Visual studio Real Dark mode (2010,2012,2013)

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/11/02/visual-studio-real-dark-mode-201020122013.aspx. When Visual Studio 2010 was released three years ago, I soon showed a few people a demo of why a dark mode for Visual Studio would be a great idea. Before long we got theme plugins that let us modify the look of Visual Studio, and http://studiostyl.es/ already provides lots of wonderful color schemes for customizing the theme. These themes also work in WebMatrix 2, which has a theme plugin made by Yishai Galatzer. In Visual Studio 2012 we got a native dark mode, which means we can configure it without any plugin at all. In this post I demo how to use the dark mode that is part of Windows 7 (and Windows 8 too). A few months ago I described a problem where WebMatrix 2 runs slowly; it runs better in the Windows 7 dark mode. By "Windows 7 dark mode" I simply mean right-clicking the desktop > Personalize > and picking a High Contrast theme at the bottom of the window. This setting makes things a little faster. Once you have set it, you will see that Visual Studio no longer looks right, because its color scheme is now broken. What you need is to import a theme from http://studiostyl.es/; once imported, it looks good again. The screenshot shows the demo look of Windows Phone Express 2010, and it behaves the same for later versions such as 2012 and 2013. Now your Visual Studio is dark everywhere. Firefox and IE will not run fully in dark mode, but you can use Chrome, which is less affected. If you benchmark it, you will notice that things that used to take a long time to load now run faster. Note: this is an experiment. Remember to back up your settings before applying a new theme. Everything I do here is aimed at making my Visual Studio run faster. If you have any trouble or ideas, please leave a comment. Thanks for reading my post.

    Read the article

  • How to use GDK with Quickly

    - by max246
    I am trying to use the GDK library to scale down an image and apply it to a GdkImage. This is the code:

        pixbuf = Gdk.pixbuf_new_from_file(fileName)
        pixbuf = pixbuf.scale_simple(100, 100, Gdk.INTERP_BILINEAR)

    The problem is that Python can't find Gdk, even if I write everything in lowercase:

        Error: pixbuf = Gdk.pixbuf_new_from_file(fileName)
        NameError: global name 'Gdk' is not defined

    I don't know what I should do, because I tried to import Gdk but nothing changed.
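    One possible explanation, offered as a sketch rather than a confirmed answer from the original thread, is that under GObject introspection the pixbuf functions live in the GdkPixbuf namespace rather than Gdk, so the import and calls would look roughly like this (the file name is a placeholder):

        # Sketch: load and scale an image via GdkPixbuf (GObject introspection).
        from gi.repository import GdkPixbuf

        file_name = "example.png"   # placeholder path
        pixbuf = GdkPixbuf.Pixbuf.new_from_file(file_name)
        pixbuf = pixbuf.scale_simple(100, 100, GdkPixbuf.InterpType.BILINEAR)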

    Read the article

  • Pay in the future should make you think in the present

    - by BuckWoody
    Distributed Computing - and more importantly “-as-a-Service” models of computing have a different cost model. This is something that sounds obvious on the surface but it’s often forgotten during the design and coding phase of a project. In on-premises computing, we’re used to purchasing a server and all of the hardware infrastructure and software licenses needed not only for one project, but several. This is an up-front or “sunk” cost that we consume by running code the organization needs to perform its function. Using a direct connection over wires you’ve already paid for, we don’t often have to think about bandwidth, hits on the data store or the amount of compute we use - we just know more is better. In a pay-as-you-go model, however, each of these architecture decisions has a potential cost impact. The amount of data you store, the number of times you access it, and the amount you send back all come with a charge. The offset is that you don’t buy anything at all up-front, so that sunk cost is freed up. And financial professionals know that money now is worth more than money later. Saving that up-front cost allows you to invest it in other things. It’s not just that you’re using things that now cost money - it’s that the design itself in distributed computing has a cost impact. That can be a really good thing, such as when you dynamically add capacity for paying customers. If you can tie back the cost of a series of clicks to what a user will pay to do so, you can set a profit margin that is easy to track. Here’s a case in point: Assume you are using a large instance in Windows Azure to compute some data that you retrieve from a SQL Azure database. If you don’t monitor the path of the application, you may not know what you are really using. Since you’re paying by the size of the instance, it’s best to maximize it all the time. Recently I evaluated just this situation, and found that downsizing the instance and adding another one where needed, adding a caching function to the application, moving part of the data into Windows Azure tables not only increased the speed of the application, but reduced the cost and more closely tied the cost to the profit. The key is this: from the very outset - the design - make sure you include metrics to measure for the cost/performance (sometimes these are the same) for your application. Windows Azure opens up awesome new ways of doing things, so make sure you study distributed systems architecture before you try and force in the application design you have on premises into your new application structure.

    Read the article

  • Projected Results

    - by Sylvie MacKenzie, PMP
    Excerpt from PROFIT - ORACLE - by Monica Mehta Yasser Mahmud has seen a revolution in project management over the past decade. During that time, the former Primavera product strategist (who joined Oracle when his company was acquired in 2008) has not only observed a transformation in the way IT systems support corporate projects but the role project portfolio management (PPM) plays in the enterprise. “15 years ago project management was the domain of project management office (PMO),” Mahmud recalls of earlier days. “But over the course of the past decade, we've seen it transform into a mission critical enterprise discipline, that has made Primavera indispensable in the board room. Now, as a senior manager, a board member, or a C-level executive you have direct and complete visibility into what’s kind of going on in the organization—at a level of detail that you're going to consume that information.” Now serving as Oracle’s vice president of product strategy and industry marketing, Mahmud shares his thoughts on how Oracle’s Primavera solutions have evolved and how best-in-class project portfolio management systems can help businesses stay competitive. Profit: What do you feel are the market dynamics that are changing project management today? Mahmud: First, the data explosion. We're generating data at twice the rate at which we can actually store it. The same concept applies for project-intensive organizations. A lot of data is gathered, but what are we really doing with it? Are we turning data into insight? Are we using that insight and turning it into foresight with analytics tools? This is a key driver that will separate the very good companies—the very competitive companies—from those that are not as competitive. Another trend is centered on the explosion of mobile computing. By the year 2013, an estimated 35 percent of the world’s workforce is going to be mobile. That’s one billion people. So the question is not if you're going to go mobile, it’s how fast you are going to go mobile. What kind of impact does that have on how the workforce participates in projects? What worked ten to fifteen years ago is not going to work today. It requires a real rethink around the interfaces and how data is actually presented. Profit: What is the role of project management in this new landscape? Mahmud: We recently conducted a PPM study with the Economist Intelligence Unit centered to determine how important project management is considered within organizations. Our target was primarily CFOs, CIOs, and senior managers and we discovered that while 95 percent of participants believed it critical to their business, only six percent were confident that projects were delivered on time and on budget. That’s a huge gap. Most organizations are looking for efficiency, especially in these volatile financial times. But senior management can’t keep track of every project in a large organization. As a result, executives are attempting to inventory the work being conducted under their watch. What is often needed is a very high-level assessment conducted at the board level to say, “Here are the 50 initiatives that we have underway. How do they line up with our strategic drivers?” This line of questioning can provide early warning that work and strategy are out of alignment; finding the gap between what the business needs to do and the actual performance scorecard. That’s low-hanging fruit for any executive looking to increase efficiency and save money. 
    But it can only be obtained through proper assessment of existing projects—and you need a project system of record to get that done. Over the next decade or so, project management is going to transform into holistic work management. Business leaders will want to make sure key projects align with corporate strategy, but they will also want the ability to drill down into daily activity and smaller projects to make sure they line up as well. Keeping employees from working on tasks—even for a few hours—that don’t line up with corporate goals will, in many ways, become a competitive differentiator. Profit: How do all of these market challenges and shifting trends impact Oracle’s Primavera solutions and meeting customers’ needs? Mahmud: For Primavera, it’s a transformation from being a project management application to a PPM system in the enterprise. It also means making that system a mission-critical application by connecting it to other key applications within the ecosystem, such as the enterprise resource planning (ERP), supply chain, and CRM systems. Analytics have also become a huge component. Business analytics have made Oracle’s Primavera applications pertinent in the boardroom. Now, as a senior manager, a board member, a CXO, CIO, or CEO, you have direct visibility into what’s going on in the organization at a level at which you're able to consume that information. In addition, all of this information pairs up really well with your financials and other data. Certainly, when you're an Oracle shop, you have visibility that you didn’t have before from a project execution perspective. Profit: What new strategies and tools are being implemented to create a more efficient workplace for users? Mahmud: We believe very strongly that just because you call something an enterprise project portfolio management system doesn’t make it so—you have to get people to want to participate in the system. This can’t be mandated down from the top. It simply doesn’t work that way. A truly adoptable solution is one that makes it super easy for all types of users to participate, by providing them interfaces where they live. Keeping that in mind, a major area of development has been alternative user interfaces. This is increasingly resulting in the creation of lighter-weight, targeted interfaces such as native iOS applications and smartphone interfaces for the iPhone and Android platforms. Profit: How does this translate into the development of Oracle’s Primavera solutions? Mahmud: Let me give you a few examples. We recently announced the launch of our Primavera P6 Team Member application, which is a native iOS application for the iPhone. This interface makes it easier for team members to do their jobs quickly and effectively. Similarly, we introduced the Primavera analytics application, which can be consumed via mobile devices, and when married with Oracle Spatial capabilities, users can get a geographical view of what’s going on and which projects are occurring in various locations around the world. Lastly, we introduced advanced email integration that allows project team members to status work via e-mail. This functionality leverages the fact that users are in their e-mail system throughout the day and allows them to status their work without the need to launch the Primavera application. It comes back to a mantra: provide as many alternative user interfaces as possible, so you can give people the ability to work, to participate, to raise issues, to create projects, in the places where they live.
Do it in such a way that it’s non-intrusive, do it in such a way that it’s easy and intuitive and they can get it done in a short amount of time. If you do that, workers can get back to doing what they're actually getting paid for.

    Read the article

  • .Net Micro

    - by MarkPearl
    A while back I purchased an RFID scanner that could be connected to a PC and programmed via VS. It was a fun purchase, and though the import duties nailed me, I was glad to get the little gadget. Last night while listening to .Net Rocks I heard of another company that sells similar components for .Net Micro. Check out their websites: TinyClr, GHI Electronics, the .Net Micro website, and Trossen Robotics.

    Read the article

  • T-SQL User-Defined Functions: the good, the bad, and the ugly (part 2)

    - by Hugo Kornelis
    In a previous blog post , I demonstrated just how much you can hurt your performance by encapsulating expressions and computations in a user-defined function (UDF). I focused on scalar functions that didn’t include any data access. In this post, I will complete the discussion on scalar UDFs by covering the effect of data access in a scalar UDF. Note that, like the previous post, this all applies to T-SQL user-defined functions only. SQL Server also supports CLR user-defined functions (written in...(read more)

    Read the article

  • Upgrading 10.04LTS -> 10.10 using custom sources

    - by Boatzart
    I'm trying to upgrade to 10.10 from 10.04 LTS using a custom sources.list file that points to an unofficial mirror*. The mirror does have maverick, but I get the following output when upgrading:

        boatzart@somecomputer: > sudo do-release-upgrade
        Checking for a new ubuntu release
        Done Upgrade tool signature
        Done Upgrade tool
        Done downloading
        extracting 'maverick.tar.gz'
        authenticate 'maverick.tar.gz' against 'maverick.tar.gz.gpg'
        tar: Removing leading `/' from member names
        Reading cache
        Checking package manager
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Building data structures... Done
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Building data structures... Done
        Updating repository information
        WARNING: Failed to read mirror file
        No valid mirror found
        While scanning your repository information no mirror entry for the upgrade was found. This can happen if you run a internal mirror or if the mirror information is out of date. Do you want to rewrite your 'sources.list' file anyway? If you choose 'Yes' here it will update all 'lucid' to 'maverick' entries. If you select 'No' the upgrade will cancel.
        Continue [yN] y
        WARNING: Failed to read mirror file
        96% [Working]
        Checking package manager
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Building data structures... Done
        Calculating the changes
        Calculating the changes
        Could not calculate the upgrade
        An unresolvable problem occurred while calculating the upgrade: The package 'update-manager-kde' is marked for removal but it is in the removal blacklist. This can be caused by:
        * Upgrading to a pre-release version of Ubuntu
        * Running the current pre-release version of Ubuntu
        * Unofficial software packages not provided by Ubuntu
        If none of this applies, then please report this bug against the 'update-manager' package and include the files in /var/log/dist-upgrade/ in the bug report.
        Restoring original system state
        Aborting
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Building data structures... Done

    Here is the relevant section from /var/log/dist-upgrade/main.log:

        2010-11-18 14:05:52,117 DEBUG The package 'update-manager-kde' is marked for removal but it's in the removal blacklist
        2010-11-18 14:05:52,136 ERROR Dist-upgrade failed: 'The package 'update-manager-kde' is marked for removal but it is in the removal blacklist.'
        2010-11-18 14:05:52,136 DEBUG abort called

    *I'm located inside of USC, and for some crazy reason any sustained downloads to anywhere outside of the University are throttled down to 5kbps inside of my lab. Because of this I need to use the following sources.list:

        deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid main restricted universe multiverse
        deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid-updates main restricted universe multiverse
        deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid-backports main restricted universe multiverse
        deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid-security main restricted universe multiverse

    I've tried adding four more entries to the sources.list with s/lucid/maverick/ but that didn't help. Does anyone know how to fix this? Thanks!

    Read the article

  • Why is Double.Parse so slow?

    - by alexhildyard
    I was recently investigating a bottleneck in one of my applications, which read a CSV file from disk using a TextReader a line at a time, split the tokens, called Double.Parse on each one, then shunted the results into an object list. I was surprised to find it was actually the Double.Parse which seemed to be taking up most of the time. Googling turned up this, which is a little unfocused in places but throws out some excellent ideas:

    - It makes more sense to work with binary format directly, rather than coerce strings into doubles
    - There is a significant performance improvement in composing doubles directly from the byte stream via long intermediaries
    - String.Split is inefficient on fixed-length records

    In fact it turned out that my problem was more insidious and also more mundane -- a simple case of bad data in, bad data out. Since I had been serialising my Doubles as strings, when I inadvertently divided by zero and produced a "NaN", this of course was serialised as well without error. And because I was reading in using Double.Parse, these "NaN" fields were also (correctly) populating real Double objects without error. The issue is that Double.Parse("NaN") is incredibly slow. In fact, it is of the order of 2000x slower than parsing a valid double. For example, the code below gave me results of 357ms to parse 1000 NaNs, versus 15ms to parse 100,000 valid doubles.

        const int invalid_iterations = 1000;
        const int valid_iterations = invalid_iterations * 100;
        const string invalid_string = "NaN";
        const string valid_string = "3.14159265";

        DateTime start = DateTime.Now;

        for (int i = 0; i < invalid_iterations; i++)
        {
            double invalid_double = Double.Parse(invalid_string);
        }

        Console.WriteLine(String.Format("{0} iterations of invalid double, time taken (ms): {1}",
            invalid_iterations,
            ((TimeSpan)DateTime.Now.Subtract(start)).Milliseconds
        ));

        start = DateTime.Now;

        for (int i = 0; i < valid_iterations; i++)
        {
            double valid_double = Double.Parse(valid_string);
        }

        Console.WriteLine(String.Format("{0} iterations of valid double, time taken (ms): {1}",
            valid_iterations,
            ((TimeSpan)DateTime.Now.Subtract(start)).Milliseconds
        ));

    I think the moral is to look at the context -- specifically the data -- as well as the code itself. Once I had corrected my data, the performance of Double.Parse was perfectly acceptable, and while clearly it could have been improved, it was now sufficient to my needs.

    Read the article

  • Best sources to find your go-to programmer

    - by user66851
    After exhausting many resources, time, interviews, etc., I cannot seem to find the right programming talent for our company. Are there any other resources you would suggest besides Dice, LinkedIn, Craigslist, university job boards, and poaching techniques? It's been months now! Specifically, we designed proprietary data-manipulation and data-gathering technology, and are looking for skilled programmers with skills in PHP5/MySQL, Javascript/HTML/CSS, cross-browser compatibility/optimization, web interface development, familiarity with source control (SVN or GIT), any L/AMP stack and/or related application protocols, GCC-supported languages, Zend Framework, and/or jQuery.

    Read the article

  • Java heap space

    - by java_mouse
    In Java/JVM, why is the memory area where Java creates objects called the "heap"? Does it use the heap data structure to create/remove/maintain objects? As I read in the documentation for the heap data structure, the algorithm compares objects with existing nodes and places them such that the parent object is "greater" than its children (or "lesser" in the case of a min-heap). So in the JVM, how are objects compared against each other before being placed in the heap?

    Read the article

  • Is there a canonical resource on multi-tenancy web applications using ruby + rails

    - by AlexC
    Is there a canonical resource on multi-tenancy web applications using Ruby + Rails? There are a number of ways to develop Rails apps using cloud capabilities with real elastic properties, but there seems to be a lack of clarity on how to achieve multi-tenancy, specifically at the model / data level. Is there a canonical resource on the options for developing multi-tenant Rails applications with the characteristics of data separation, security, concurrency and contention required by an enterprise-level cloud application?

    Read the article

  • Oracle University New Courses (Week 14)

    - by swalker
    Oracle University released the following new (versions of) courses recently: Database Oracle Data Modeling and Relational Database Design (4 days) Fusion Middleware Oracle Directory Services 11g: Administration (5 days) Oracle Unified Directory 11g: Services Deployment Essentials (2 days) Oracle GoldenGate 11g Management Pack: Overview (1 day) Business Intelligence & Datawarehousing Oracle Database 11g: Data Mining Techniques (2 days) Oracle Solaris Oracle Solaris 10 System Administration for HP-UX Administrators (5 days) E-Business Suite R12.x Oracle Time and Labor Fundamentals Get in contact with your local Oracle University team for more details and course dates. Stay Connected to Oracle University: LinkedIn OracleMix Twitter Facebook Google+

    Read the article

  • The Great PST Migration

    Having recently been on the front lines of a massive PST import operation, Sean Duffy offers advice and points out pitfalls. More than anything, he wishes he had a simple tool with which to banish PST hell, and finishes with some hard-won guidelines.

    Read the article

  • quickly package --extras doesn't produce /opt/extras.ubuntu.com/../share/locale

    - by user75704
    I'm trying to package an app to /opt, but when installed the app won't run and complains:

        Traceback (most recent call last):
          File "/opt/extras.ubuntu.com/drawers/bin/drawers", line 45, in <module>
            import drawers
          File "/opt/extras.ubuntu.com/drawers/drawers/__init__.py", line 21, in <module>
            locale.bindtextdomain('drawers', '/opt/extras.ubuntu.com/drawers/share/locale')
        NameError: name 'locale' is not defined

    I can't figure out what I need to change. Is there a config file I need to alter?
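    The traceback itself suggests (this is an observation, not an answer from the original thread) that the module calling bindtextdomain never imports locale. A minimal sketch of what the top of the __init__.py named in the traceback might need:

        # Sketch: bindtextdomain lives in the standard locale module, so it must be imported first.
        import locale

        locale.bindtextdomain('drawers', '/opt/extras.ubuntu.com/drawers/share/locale')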

    Read the article

  • Commnunity Technology Update (CTU) 2011

    - by Aman Garg
    Spoke at the session on Web Forms at CTU 2011 (Community Technology Update) in Singapore, and had a good interaction with the developer community here. I covered the following topics during the session: Dynamic Data; Routing; Web Forms additions (predictable client IDs, programmable metadata, better control over ViewState, and persisting selected rows); and Web Deployment. The slide deck can be accessed at the following URL: http://www.slideshare.net/amangarg516/web-forms-im-still-alive

    Read the article
