Search Results

Search found 59554 results on 2383 pages for 'distributed data mining'.

Page 729/2383

  • Advertising Opportunity – Profit Magazine For Oracle OpenWorld

    - by tfryer
    With Oracle OpenWorld fast approaching, Profit Magazine is offering Oracle Specialized partners the opportunity to extend their brand to executive-level Oracle customers and top prospects in the Profit Magazine: Specialized Partner Edition. The printed magazine will be distributed to attendees at Oracle OpenWorld San Francisco, and the digital copy will be distributed to over 500,000 customers in the Profit readership. In addition, the magazine will be promoted via social media such as Facebook, LinkedIn, and Twitter. For this very affordable advertising opportunity, please contact Tom Cometa at [email protected] or +1.510.339.2403. Reserve before July 27th. Hurry! An early-bird discount of 15% applies if booked before July 18th.

    Read the article

  • Software license restricting commercial usage like CC BY-NC-SA

    - by Nick
    I want to distribute my software under a license like the Creative Commons Attribution-NonCommercial-ShareAlike license, i.e.: redistribution of source code and binaries is free; modified versions of the program have to be distributed under the same license; attribution to the original project has to be supplied; and any kind of commercial usage is restricted. However, CC does not recommend using their licenses for software. Is there a software license of this kind I could apply? A public license would be better, but as far as I know US law says that only a EULA can restrict usage of a received copy?

    Read the article

  • Restoring databases to a set drive and directory

    - by okeofs
    Introduction

    Often people say that necessity is the mother of invention. In this case I was faced with the dilemma of having to restore several databases, each with multiple ‘ndf’ files, with different physical file names, drives and directories, on servers other than the servers from which they originated. As most of us would do, I went to Google to see if I could find some code to achieve this task, and found some interesting snippets on Pinal Dave’s website. Naturally, I had to take it further than the code snippet, HOWEVER it was a great place to start.

    Creating a temp table to hold database file details

    First off, I created a temp table to hold the details of the individual data files within the database. Although there is a plethora of fields within the temp table below, I utilize LogicalName only within this example. The temporary table structure may be seen below:

        create table #tmp (
            LogicalName nvarchar(128),
            PhysicalName nvarchar(260),
            Type char(1),
            FileGroupName nvarchar(128),
            Size numeric(20,0),
            MaxSize numeric(20,0),
            Fileid tinyint,
            CreateLSN numeric(25,0),
            DropLSN numeric(25,0),
            UniqueID uniqueidentifier,
            ReadOnlyLSN numeric(25,0),
            ReadWriteLSN numeric(25,0),
            BackupSizeInBytes bigint,
            SourceBlockSize int,
            FileGroupId int,
            LogGroupGUID uniqueidentifier,
            DifferentialBaseLSN numeric(25,0),
            DifferentialBaseGUID uniqueidentifier,
            IsReadOnly bit,
            IsPresent bit,
            TDEThumbPrint varchar(50)
        )

    We now declare and populate a variable (@path), setting it to the path of our SOURCE database backup:

        declare @path varchar(50)
        set @path = 'P:\DATA\MYDATABASE.bak'

    From this point, we insert the file details of our database into the temp table. Note that we do so by utilizing a restore statement, HOWEVER in ‘filelistonly’ mode:

        insert #tmp
        EXEC ('restore filelistonly from disk = ''' + @path + '''')

    At this point, I depart from what I gleaned from Pinal Dave. I now declare a few more local variables. The use of each variable will be evident within the cursor (which follows):

        Declare @RestoreString as Varchar(max)
        Declare @NRestoreString as NVarchar(max)
        Declare @LogicalName as varchar(75)
        Declare @counter as int
        Declare @rows as int
        set @counter = 1
        select @rows = COUNT(*) from #tmp -- Count the number of records in the temp table

    Declaring and populating the cursor

    At this point I do realize that many people are cringing about the use of a cursor. Being an Oracle professional as well, I have learnt that there is a time and place for cursors. I would remind the reader that the data that will be read into the cursor comes from a local temp table, and as such any locking of the records (within the temp table) is not really an issue.

        DECLARE MY_CURSOR Cursor FOR
        Select LogicalName From #tmp

    Parsing the logical names from within the cursor

    A small caveat that works in our favour is that the first logical name (of our database) is the logical name of the primary data file (.mdf). The other logical names, except for the very last one, belong to secondary data files (.ndf). The last logical name is that of our database log file.

    I now open my cursor and populate the variable @RestoreString:

        Open My_Cursor
        set @RestoreString = 'RESTORE DATABASE [MYDATABASE] FROM DISK = N''P:\DATA\MYDATABASE.bak''' + ' with '

    We now fetch the first record from the temp table:

        Fetch NEXT FROM MY_Cursor INTO @LogicalName

    While there are still records left within the cursor, we dynamically build our restore string. Note that we are using concatenation to create ‘one big restore executable string’. Note also that the target physical file name is hardwired, as is the target directory.

        While (@@FETCH_STATUS <> -1)
        BEGIN
            IF (@@FETCH_STATUS <> -2) -- As long as there are no rows missing
            select @RestoreString = case
                when @counter = 1 then
                    -- This is the mdf file
                    @RestoreString + 'move N''' + @LogicalName + '''' + ' TO N''X:\DATA1\' + @LogicalName + '.mdf' + '''' + ', '
                when @counter > 1 and @counter < @rows then
                    -- These are the ndf file(s); @counter is greater than 1 and less than the number of rows
                    @RestoreString + 'move N''' + @LogicalName + '''' + ' TO N''X:\DATA1\' + @LogicalName + '.ndf' + '''' + ', '
                when @LogicalName like '%log%' then
                    -- This is the log file
                    @RestoreString + 'move N''' + @LogicalName + '''' + ' TO N''X:\DATA1\' + @LogicalName + '.ldf' + ''''
            end
            -- Increment the counter
            set @counter = @counter + 1
            FETCH NEXT FROM MY_CURSOR INTO @LogicalName
        END

    At this point we have populated the varchar(max) variable @RestoreString with a concatenation of all the necessary file moves, something like (with illustrative logical names): RESTORE DATABASE [MYDATABASE] FROM DISK = N'P:\DATA\MYDATABASE.bak' with move N'MyData' TO N'X:\DATA1\MyData.mdf', move N'MyLog' TO N'X:\DATA1\MyLog.ldf'. What we now need to do is run the sp_executesql stored procedure to effect the restore. First, we must place our ‘concatenated string’ into an nvarchar-based variable; this will only work as long as the length of @RestoreString does not exceed what nvarchar(max) can hold.

        set @NRestoreString = @RestoreString
        EXEC sp_executesql @NRestoreString

    Upon completion of this step, the database should be restored to the server. I now close and deallocate the cursor and, to be clean, drop my temp table:

        CLOSE MY_CURSOR
        DEALLOCATE MY_CURSOR
        GO

    Conclusion

    Having to restore databases onto different servers, with different physical file names, drives and directories, is a fact of life. Through the use of a few variables and a simple cursor, we have an efficient and effective way to achieve this task.

    Read the article

  • svn vs git for the sole developer? [closed]

    - by nattyP
    If I am a sole developer (I do not work in a team) working from my laptop (Windows OS and a Linux VM) and backing up data to the cloud (Dropbox, etc.), then is git still better than svn for my version control needs? I was thinking not, since I won't need any of git's distributed features. But is git such a better approach to version control that I should consider moving anyway? So many articles describe how people are moving from svn to git, but I wonder whether they are talking about large or open projects with teams of developers rather than the sole developer. What do you think?

    Read the article

  • SharePoint 2010 Hosting :: How to Create an External Content Type in SharePoint 2010

    - by mbridge
    In this short article I will show how easy it is to use SharePoint Designer 2010 to create an External Content Type pointing to an external database and integrate it with our SharePoint portals. You can download SharePoint Designer 2010 here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d88a1505-849b-4587-b854-a7054ee28d66&displaylang=en For this example I will create a database in SQL Server and use SharePoint Designer 2010 to create the connections, using a list as a mirror between our SharePoint portal and the database. The first thing we need to do is connect to SQL Server, create our database called “Contacts”, and add the table “Contact” with the following fields. When we create the External Content Type, we need to associate it with a content type; in this case I am using the Generic List. Then we can create the connection to the external data source. After creating the connection to the database, we can define which columns we will use and which operations we will add to our custom list. For this example I selected all the operations offered by default. These operations are very important because the business rules are defined in each operation. After we create the different operations, we can create the custom list, define how each operation will behave, and set the name of our custom list. If you try to access the new custom list, called “Custom Contact”, you will see that we do not yet have access to the Business Data Connectivity. To resolve this issue we need to give users access and permissions to the custom External Content Type BDC connection in Central Administration. Access the Central Administration page and select the option “Service Applications Tab > Manage Service Applications”. There, select the service “Business Data Connectivity Service” and then select “Manage”. This option will list all External Content Types; choose the External Content Type we created and select the option “Set Object Permissions”. This option allows you to add users to the BDC and manage their permissions to the custom list. After the correct permissions are given, we can access the data in our custom Contact list and start creating new items, with all the other options and operations we defined for the list. Hope you like this little article about connecting database content to a SharePoint portal using External Content Types and BCS. Thank you.

    Read the article

  • Where can I find "magic numbers" for classic game play mechanics?

    - by MrDatabase
    I'd like to find some "magic numbers" for the classic helicopter game. For example, the numbers that determine how fast the helicopter accelerates up and down, and perhaps the "randomness" of the obstacles (uniformly distributed? Gaussian?). Where can I find these numbers? P.S. I don't care about the particular platform... Flash in the desktop browser is just as good as some implementation on a mobile device.
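    To make the question concrete, here is a minimal sketch of the kind of update loop where these "magic numbers" live; every constant below is an assumed placeholder for illustration, not a value taken from any actual game:

        # Minimal sketch of helicopter-game vertical physics.
        # All constants are illustrative assumptions, not values from a real game.
        GRAVITY = 900.0         # downward acceleration, px/s^2 (assumed)
        THRUST = -1800.0        # upward acceleration while input is held (assumed)
        MAX_FALL_SPEED = 600.0  # terminal velocity, px/s (assumed)

        def step(y, vy, button_held, dt):
            """Advance the helicopter's vertical position by one frame."""
            accel = THRUST if button_held else GRAVITY
            vy = min(vy + accel * dt, MAX_FALL_SPEED)
            return y + vy * dt, vy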

    Read the article

  • My Last "Catch-Up" Post for 2010 Content

    - by KKline
    I did a lot of writing in 2010. Unfortunately, I didn't do a good job of keeping all of that writing equally distributed throughout all of the channels where I'm active. So here are a few more posts from my blog, put online during the months of November and December 2010, that I didn't get posted here on SQLBlog.com: 1. It's Time to Upgrade! So many of my customers and many of you, dear readers, are still on SQL Server 2005. Join Kevin Kline, SQL Server MVP and SQL Server Technology Strategist...(read more)

    Read the article

  • Java heap space

    - by java_mouse
    In Java/the JVM, why is the memory area where Java creates objects called the "heap"? Does it use the heap data structure to create/remove/maintain objects? As I read in the documentation of the heap data structure, the algorithm compares objects with existing nodes and places them in such a way that the parent object is "greater" than its children (or "lesser", in the case of a min-heap). So in the JVM, how are objects compared against each other before being placed in the heap?
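    For readers unfamiliar with the data structure the question refers to, here is a minimal sketch using Python's heapq module (a binary min-heap), illustrating the comparison-based placement described above; the sample values are arbitrary:

        # heapq maintains the min-heap invariant the question describes:
        # every parent compares less than or equal to its children.
        import heapq

        heap = []
        for x in [5, 1, 4, 2, 3]:
            heapq.heappush(heap, x)  # placement is driven by comparisons

        print(heap[0])  # 1 -- the smallest element sits at the root
        print([heapq.heappop(heap) for _ in range(len(heap))])  # [1, 2, 3, 4, 5]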

    Read the article

  • What is the proper name for this design pattern in Python?

    - by James
    In Python, is the proper name for the PersonXXX class below PersonProxy, PersonInterface, etc.?

        import rest

        class PersonXXX(object):
            def __init__(self, db_url):
                self.resource = rest.Resource(db_url)

            def create(self, person):
                self.resource.post(person.data())

            def get(self):
                pass

            def update(self):
                pass

            def delete(self):
                pass

        class Person(object):
            def __init__(self, name, age):
                self.name = name
                self.age = age

            def data(self):
                return dict(name=self.name, age=self.age)
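    For context, a minimal usage sketch of the classes above, assuming the rest module from the question provides the Resource class shown; the URL and values are placeholders:

        # Hypothetical usage of the classes above; the URL is illustrative only.
        api = PersonXXX("http://db.example.com/people")
        api.create(Person("Alice", 30))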

    Read the article

  • Cloud Integration White Paper - Now Available

    - by Bruce Tierney
    Interested in expanding your existing application infrastructure to integrate with cloud applications? Download the new Oracle white paper "Cloud Integration - A Comprehensive Solution" to learn not just about connectivity but also about the other key aspects of successful cloud integration. The paper includes three technical examples of cloud integration with Oracle Fusion Applications, Salesforce, and Workday, then covers the importance of taking a comprehensive approach that also includes service aggregation, service virtualization, cloud security considerations, and the benefit of maintaining a unified approach to monitoring and management despite an increasingly distributed hybrid infrastructure. To keep the integration architecture from being defined "accidentally" as new business units subscribe to additional cloud vendors without the participation of IT, a discussion of the "Accidental SOA Cloud Architecture" is included. As shown in the table of contents, the white paper provides a combination of high-level awareness of key considerations and a technical deep dive into the steps needed for cloud integration connectivity. Hope you find the white paper valuable. Please download it from the following link.

    Read the article

  • Projected Results

    - by Sylvie MacKenzie, PMP
    Excerpt from PROFIT - ORACLE - by Monica Mehta

    Yasser Mahmud has seen a revolution in project management over the past decade. During that time, the former Primavera product strategist (who joined Oracle when his company was acquired in 2008) has observed a transformation not only in the way IT systems support corporate projects but also in the role project portfolio management (PPM) plays in the enterprise. “Fifteen years ago project management was the domain of the project management office (PMO),” Mahmud recalls. “But over the course of the past decade, we've seen it transform into a mission-critical enterprise discipline that has made Primavera indispensable in the boardroom. Now, as a senior manager, a board member, or a C-level executive, you have direct and complete visibility into what's going on in the organization, at a level of detail at which you're able to consume that information.” Now serving as Oracle's vice president of product strategy and industry marketing, Mahmud shares his thoughts on how Oracle's Primavera solutions have evolved and how best-in-class project portfolio management systems can help businesses stay competitive.

    Profit: What do you feel are the market dynamics that are changing project management today?

    Mahmud: First, the data explosion. We're generating data at twice the rate at which we can actually store it. The same concept applies to project-intensive organizations. A lot of data is gathered, but what are we really doing with it? Are we turning data into insight? Are we using that insight and turning it into foresight with analytics tools? This is a key driver that will separate the very good companies—the very competitive companies—from those that are not as competitive. Another trend is centered on the explosion of mobile computing. By the year 2013, an estimated 35 percent of the world's workforce is going to be mobile. That's one billion people. So the question is not if you're going to go mobile, it's how fast you are going to go mobile. What kind of impact does that have on how the workforce participates in projects? What worked ten to fifteen years ago is not going to work today. It requires a real rethink around the interfaces and how data is actually presented.

    Profit: What is the role of project management in this new landscape?

    Mahmud: We recently conducted a PPM study with the Economist Intelligence Unit to determine how important project management is considered within organizations. Our target was primarily CFOs, CIOs, and senior managers, and we discovered that while 95 percent of participants believed it critical to their business, only six percent were confident that projects were delivered on time and on budget. That's a huge gap. Most organizations are looking for efficiency, especially in these volatile financial times. But senior management can't keep track of every project in a large organization. As a result, executives are attempting to inventory the work being conducted under their watch. What is often needed is a very high-level assessment, conducted at the board level, to say, “Here are the 50 initiatives that we have underway. How do they line up with our strategic drivers?” This line of questioning can provide early warning that work and strategy are out of alignment, finding the gap between what the business needs to do and the actual performance scorecard. That's low-hanging fruit for any executive looking to increase efficiency and save money.
    But it can only be obtained through proper assessment of existing projects—and you need a project system of record to get that done. Over the next decade or so, project management is going to transform into holistic work management. Business leaders will want to make sure key projects align with corporate strategy, but they will also want the ability to drill down into daily activity and smaller projects to make sure those line up as well. Keeping employees from working on tasks—even for a few hours—that don't line up with corporate goals will, in many ways, become a competitive differentiator.

    Profit: How do all of these market challenges and shifting trends impact Oracle's Primavera solutions and meeting customers' needs?

    Mahmud: For Primavera, it's a transformation from being a project management application to being the PPM system of the enterprise, and making that system a mission-critical application by connecting it to other key applications within the ecosystem, such as the enterprise resource planning (ERP), supply chain, and CRM systems. Analytics have also become a huge component. Business analytics have made Oracle's Primavera applications pertinent in the boardroom. Now, as a senior manager, a board member, a CXO, CIO, or CEO, you have direct visibility into what's going on in the organization, at a level at which you're able to consume that information. In addition, all of this information pairs up really well with your financials and other data. Certainly, when you're an Oracle shop, you have visibility that you didn't have before from a project execution perspective.

    Profit: What new strategies and tools are being implemented to create a more efficient workplace for users?

    Mahmud: We believe very strongly that just because you call something an enterprise project portfolio management system doesn't make it so—you have to get people to want to participate in the system. This can't be mandated down from the top. It simply doesn't work that way. A truly adoptable solution is one that makes it super easy for all types of users to participate, by providing them interfaces where they live. Keeping that in mind, a major area of development has been alternative user interfaces. This is increasingly resulting in the creation of lighter-weight, targeted interfaces, such as native applications and smartphone interfaces for the iPhone and Android platforms.

    Profit: How does this translate into the development of Oracle's Primavera solutions?

    Mahmud: Let me give you a few examples. We recently announced the launch of our Primavera P6 Team Member application, which is a native iOS application for the iPhone. This interface makes it easier for team members to do their jobs quickly and effectively. Similarly, we introduced the Primavera analytics application, which can be consumed via mobile devices and, when married with Oracle Spatial capabilities, gives users a geographical view of what's going on and where projects are occurring around the world. Lastly, we introduced advanced e-mail integration that allows project team members to status work via e-mail. This functionality leverages the fact that users are in their e-mail systems throughout the day, and allows them to status their work without the need to launch the Primavera application. It comes back to a mantra: provide as many alternative user interfaces as possible, so you can give people the ability to work, to participate, to raise issues, to create projects, in the places where they live. Do it in such a way that it's non-intrusive, easy, and intuitive, so they can get it done in a short amount of time. If you do that, workers can get back to doing what they're actually getting paid for.

    Read the article

  • Best sources to find your go-to programmer

    - by user66851
    After exhausting many resources, much time, interviews, etc., I cannot seem to find the right programming talent for our company. Are there any other resources you would suggest besides Dice, LinkedIn, Craigslist, university job boards, and poaching techniques? It's been months now! Specifically, we designed proprietary data-manipulation and data-gathering technology, and are looking for skilled programmers with PHP5/MySQL, JavaScript/HTML/CSS, cross-browser compatibility/optimization, web interface development, familiarity with source control (SVN or Git), any L/AMP stack and/or related application protocols, GCC-supported languages, Zend Framework, and/or jQuery.

    Read the article

  • Is there a canonical resource on multi-tenancy web applications using ruby + rails

    - by AlexC
    Is there a canonical resource on multi-tenancy web applications using Ruby + Rails? There are a number of ways to develop Rails apps using cloud capabilities with real elastic properties, but there seems to be a lack of clarity on how to achieve multi-tenancy, specifically at the model/data level. In particular, I am looking for a canonical resource on options for developing multi-tenant Rails applications with the characteristics of data separation, security, concurrency and contention required by an enterprise-level cloud application.

    Read the article

  • XNA Shader Texture Memory

    - by Alex
    I was wondering about texture optimization in XNA 4.0. Does the ContentManager send the texture data to the GPU directly when the texture gets loaded, or do I send the texture data to the GPU when I declare a texture in my shader? If it's the latter, what happens if I have 5 shaders all using the same texture: does that mean I send 5 instances of that texture data to the GPU, or am I simply telling the GPU which preloaded texture to use? Or does XNA do the heavy lifting in the background?

    Read the article

  • Alexa Traffic Rankings Continuously Inaccurate - Room For Improvement

    There is growing discussion on the internet that the Alexa Traffic Rankings are not accurate and cannot be relied upon as a measure of traffic by the average site owner. The main flaw that I have found with Alexa is its primary method of collecting user traffic data: the Alexa Toolbar, which collects data from the browsers of users who have it installed. This is my experiment and the results.

    Read the article

  • T-SQL User-Defined Functions: the good, the bad, and the ugly (part 2)

    - by Hugo Kornelis
    In a previous blog post, I demonstrated just how much you can hurt your performance by encapsulating expressions and computations in a user-defined function (UDF). I focused on scalar functions that didn't include any data access. In this post, I will complete the discussion on scalar UDFs by covering the effect of data access in a scalar UDF. Note that, like the previous post, this all applies to T-SQL user-defined functions only. SQL Server also supports CLR user-defined functions (written in...(read more)

    Read the article

  • Upgrading 10.04LTS -> 10.10 using custom sources

    - by Boatzart
    I'm trying to upgrade to 10.10 from 10.04 LTS using a custom sources.list file that points to an unofficial mirror*. The mirror does have maverick, but I get the following output when upgrading:

        boatzart@somecomputer: > sudo do-release-upgrade
        Checking for a new ubuntu release
        Done Upgrade tool signature
        Done Upgrade tool
        Done downloading
        extracting 'maverick.tar.gz'
        authenticate 'maverick.tar.gz' against 'maverick.tar.gz.gpg'
        tar: Removing leading `/' from member names
        Reading cache
        Checking package manager
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Building data structures... Done
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Building data structures... Done
        Updating repository information
        WARNING: Failed to read mirror file
        No valid mirror found
        While scanning your repository information no mirror entry for the upgrade was found. This can happen if you run a internal mirror or if the mirror information is out of date. Do you want to rewrite your 'sources.list' file anyway? If you choose 'Yes' here it will update all 'lucid' to 'maverick' entries. If you select 'No' the upgrade will cancel.
        Continue [yN] y
        WARNING: Failed to read mirror file
        96% [Working]
        Checking package manager
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Building data structures... Done
        Calculating the changes
        Calculating the changes
        Could not calculate the upgrade
        An unresolvable problem occurred while calculating the upgrade:
        The package 'update-manager-kde' is marked for removal but it is in the removal blacklist.
        This can be caused by:
        * Upgrading to a pre-release version of Ubuntu
        * Running the current pre-release version of Ubuntu
        * Unofficial software packages not provided by Ubuntu
        If none of this applies, then please report this bug against the 'update-manager' package and include the files in /var/log/dist-upgrade/ in the bug report.
        Restoring original system state
        Aborting
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Building data structures... Done

    Here is the relevant section from /var/log/dist-upgrade/main.log:

        2010-11-18 14:05:52,117 DEBUG The package 'update-manager-kde' is marked for removal but it's in the removal blacklist
        2010-11-18 14:05:52,136 ERROR Dist-upgrade failed: 'The package 'update-manager-kde' is marked for removal but it is in the removal blacklist.'
        2010-11-18 14:05:52,136 DEBUG abort called

    *I'm located inside of USC, and for some crazy reason any sustained downloads to anywhere outside of the University are throttled down to 5kbps inside of my lab. Because of this I need to use the following sources.list:

        deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid main restricted universe multiverse
        deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid-updates main restricted universe multiverse
        deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid-backports main restricted universe multiverse
        deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid-security main restricted universe multiverse

    I've tried adding four more entries to the sources.list with s/lucid/maverick/, but that didn't help. Does anyone know how to fix this? Thanks!

    Read the article

  • How can I distribute a unique database already in production?

    - by JVerstry
    Let's assume a successful Spring web application running on a MySQL or PostgreSQL database. The traffic is becoming so high and the amount of data so big that a distributed database solution needs to be implemented to address the scalability issue. Let's also assume this application uses Hibernate and that the data access layer is cleanly separated with DAOs. Ideally, one should be able to add or remove databases easily. A failback solution is welcome too. What would be the best strategy to scale this database? Is it possible to minimize the sharding code (Shard) in the application?
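    For illustration only, here is a language-agnostic sketch (shown in Python rather than the question's Java/Hibernate stack) of the routing decision that sharding introduces; the shard URLs and the routing key are assumptions:

        # Hash-based shard routing sketch; URLs and key are illustrative.
        SHARDS = [
            "postgresql://db0.example.com/app",
            "postgresql://db1.example.com/app",
        ]

        def shard_for(customer_id):
            """Pick a shard deterministically from the routing key."""
            return SHARDS[customer_id % len(SHARDS)]

    Keeping this choice behind the DAO layer is one way to keep sharding code from spreading through the application.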

    Read the article

  • Why is Double.Parse so slow?

    - by alexhildyard
    I was recently investigating a bottleneck in one of my applications, which read a CSV file from disk using a TextReader a line at a time, split the tokens, called Double.Parse on each one, then shunted the results into an object list. I was surprised to find that it was actually the Double.Parse which seemed to be taking up most of the time. Googling turned up this, which is a little unfocused in places but throws out some excellent ideas:

    * It makes more sense to work with binary format directly, rather than coerce strings into doubles
    * There is a significant performance improvement in composing doubles directly from the byte stream via long intermediaries
    * String.Split is inefficient on fixed-length records

    In fact it turned out that my problem was more insidious and also more mundane: a simple case of bad data in, bad data out. Since I had been serialising my Doubles as strings, when I inadvertently divided by zero and produced a "NaN", this of course was serialised as well without error. And because I was reading in using Double.Parse, these "NaN" fields were also (correctly) populating real Double objects without error. The issue is that Double.Parse("NaN") is incredibly slow. In fact, it is of the order of 2000x slower than parsing a valid double. For example, the code below gave me results of 357ms to parse 1000 NaNs, versus 15ms to parse 100,000 valid doubles.

        const int invalid_iterations = 1000;
        const int valid_iterations = invalid_iterations * 100;
        const string invalid_string = "NaN";
        const string valid_string = "3.14159265";

        DateTime start = DateTime.Now;

        for (int i = 0; i < invalid_iterations; i++)
        {
            double invalid_double = Double.Parse(invalid_string);
        }

        Console.WriteLine(String.Format("{0} iterations of invalid double, time taken (ms): {1}",
            invalid_iterations,
            ((TimeSpan)DateTime.Now.Subtract(start)).Milliseconds
        ));

        start = DateTime.Now;

        for (int i = 0; i < valid_iterations; i++)
        {
            double valid_double = Double.Parse(valid_string);
        }

        Console.WriteLine(String.Format("{0} iterations of valid double, time taken (ms): {1}",
            valid_iterations,
            ((TimeSpan)DateTime.Now.Subtract(start)).Milliseconds
        ));

    I think the moral is to look at the context, specifically the data, as well as the code itself. Once I had corrected my data, the performance of Double.Parse was perfectly acceptable, and while clearly it could have been improved, it was now sufficient to my needs.

    Read the article

  • Community Technology Update (CTU) 2011

    - by Aman Garg
    I spoke at the session on Web Forms at CTU 2011 (Community Technology Update) in Singapore, and had a good interaction with the developer community here. I covered the following topics during the session:

    * Dynamic Data
    * Routing
    * Web Form Additions
      * Predictable Client IDs
      * Programmable Meta Data
      * Better control over ViewState
      * Persist selected rows
    * Web Deployment

    The slide deck used can be accessed using the following URL: http://www.slideshare.net/amangarg516/web-forms-im-still-alive

    Read the article

  • Using EPEL repos with Oracle Linux

    - by wcoekaer
    There's a Fedora project called EPEL which hosts a set of additional packages that can be installed on top of various distributions such as Red Hat Enterprise Linux, CentOS, Scientific Linux and, of course, also Oracle Linux. These packages are not distributed by the distribution vendor and as such are also not supported by the vendors (including Oracle); however, for users who want to pick up some useful extras, it's very easy to do. All you need to do is download the EPEL RPM from the website, install it on Oracle Linux 5 or Oracle Linux 6, and run yum install or yum search to get the packages. Example:

        # wget http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-5.noarch.rpm
        # rpm -ivh epel-release-6-5.noarch.rpm
        # yum repolist
        Loaded plugins: refresh-packagekit, rhnplugin
        repo id   repo name                                        status
        epel      Extra Packages for Enterprise Linux 6 - x86_64   7,124

    The folks who build these repositories are doing a great job of adding very useful packages. They are free, but also unsupported, of course.

    Read the article

  • Exadata X3 Expandability Update

    - by Bandari Huang
    Exadata Database Machine X3-2 Data Sheet (new 10/29/2012): Up to 18 racks can be connected without requiring additional InfiniBand switches. Exadata Database Machine X3-8 Data Sheet (new 10/24/2012): Scale by connecting multiple Exadata Database Machine X3-8 racks or Exadata Storage Expansion Racks. Up to 18 racks can be connected simply via InfiniBand cables; larger configurations can be built with additional InfiniBand switches.

    Read the article

  • Need help identifying what resources (e.g. in MIT OpenCourseWare) can help me prepare for a test [closed]

    - by jiewmeng
    I am entering uni soon. I can sit for a placement test to see if I am eligible for exemptions. The details are at http://www.comp.nus.edu.sg/undergraduates/TestScope11_12.html

    CS2100 Computer Organisation: The objective of this module is to familiarise students with the fundamentals of computing devices. Through this module students will understand the basics of data representation, and how the various parts of a computer work, separately and with each other. This allows students to understand the issues in computing devices, and how these issues affect the implementation of solutions. Topics covered include data representation systems, combinational and sequential circuit design techniques, assembly language, processor execution cycles, pipelining, memory hierarchy and input/output systems. Recommended textbooks: Digital Design: Principles and Practices [DDPP] by John F. Wakerly, Prentice-Hall, ISBN 0-13-324500-4; Computer Organization and Design (The Hardware/Software Interface) by David A. Patterson and John L. Hennessy.

    CS2105 Introduction to Computer Networks: This course aims to provide a broad introduction to computer networks and some appreciation of network application programming. It covers a range of topics including basic data communication and computer network concepts, protocols, networked computing concepts and principles, network applications development and network security. The emphasis of teaching is on the working principles and application of computer networks. As an integral part of the course, tutorials and practical assignments reinforcing learning will also be given. These assignments provide an early exposure to network application programming, and students should be able to complete them using personal computers and the school's network facilities. Topics included: an overview of computer networks and the Internet; basic data communications; application layer; transport layer; network layer and routing; link layer and local area networks. Recommended textbook: James F. Kurose & Keith W. Ross, Computer Networking: A Top-Down Approach Featuring the Internet, Addison Wesley, 2001.

    I am wondering what resources (e.g. MIT OpenCourseWare or other universities' resources) are available to help me prepare for these particular modules. Does the networking module look like CCNA? Is the computer organisation one like electronics, assembly, etc.? I learnt some electronics in Poly, but looking at the sample papers, uni looks very different... I have about 1 month to prepare if I want any chance of being exempted from these modules :) Any help?

    Read the article

  • What actions should I not rely on the packaged functionality of my language for?

    - by David Peterman
    While talking with one of my coworkers, he brought up the issues the language we used had with encryption/decryption and said that a developer should always salt their own hashes. Another example I can think of is the mysql_real_escape_string function in PHP that programmers use to sanitize input data; I've heard many times that a developer should sanitize the data themselves. My question is: what things should a developer always do on their own, for whatever reason, rather than relying on the standard libraries packaged with a language?
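    As a sketch of the salting advice mentioned above (shown in Python rather than PHP, using only standard-library calls; the salt size and iteration count are illustrative choices):

        import hashlib
        import hmac
        import os

        def hash_password(password):
            """Return (salt, digest) for storage; the salt is unique per password."""
            salt = os.urandom(16)
            digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
            return salt, digest

        def verify(password, salt, digest):
            """Recompute the digest and compare in constant time."""
            candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
            return hmac.compare_digest(candidate, digest)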

    Read the article
