Search Results

Search found 13608 results on 545 pages for 'performance dashboard'.

  • Hibernate database integrity with multiple java applications

    - by Austen
    We have 2 Java web apps, both read/write, and 3 standalone Java read/write applications (one loads questions via email, one processes an XML feed, one sends email to subscribers). All of them use Hibernate and share a common code base. The problem we have recently come across is that questions loaded via email sometimes overwrite questions created in one of the web apps. We originally thought this to be a caching issue. We've tried turning off the second-level cache, but this doesn't make a difference. We are not explicitly opening and closing sessions, but rather let Hibernate manage them via Util.getSessionFactory().getCurrentSession(), which, thinking about it, may actually be the issue. We'd rather not set up a clustered second-level cache at this stage, as this creates another layer of complexity and we're more than happy with the level of performance we get from the app as a whole. So does implementing an open-session-in-view pattern in the web apps and manually managing the sessions in the standalone apps sound like it would fix this? Or any other suggestions/ideas, please? Our relevant Hibernate settings:

        <property name="hibernate.transaction.factory_class">org.hibernate.transaction.JDBCTransactionFactory</property>
        <property name="hibernate.current_session_context_class">thread</property>
        <property name="hibernate.cache.use_second_level_cache">false</property>
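
    A minimal sketch of what explicit, per-unit-of-work session handling in the standalone loaders could look like, assuming the Util class above exposes the SessionFactory (the method, entity and variable names are invented for illustration). If the real issue turns out to be lost updates rather than session lifetime, Hibernate's optimistic locking (a <version> mapping on the question entity) is the usual guard:

        // Hedged sketch -- hypothetical method inside the email loader.
        // Uses org.hibernate.Session and org.hibernate.Transaction.
        void applyEmailedUpdate(Long questionId, String updatedBody) {
            Session session = Util.getSessionFactory().openSession();
            Transaction tx = null;
            try {
                tx = session.beginTransaction();
                Question q = (Question) session.get(Question.class, questionId); // re-read current state
                q.setBody(updatedBody);                                          // apply the emailed change
                session.update(q);
                tx.commit();                                                     // flush and end the unit of work
            } catch (RuntimeException e) {
                if (tx != null) tx.rollback();
                throw e;
            } finally {
                session.close();                                                 // never reuse a stale session
            }
        }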

  • Java SSH2 libraries in depth: Trilead/Ganymed/Orion [/other?]

    - by Bernd Haug
    I have been searching for a pure Java SSH library to use for a project. The single most important needed feature is that it has to be able to work with command-line git, but remote-controlling command-line tools is also important. A pretty common choice, e.g. used in the IntelliJ IDEA git integration (which works very well), seems to be Trilead SSH2. Looking at their website, it's not being maintained any more. Trilead seems to have been a fork of Ganymed SSH2, which was an ETH Zurich project that didn't see releases for a while, but had a recent release by its new owner, Christian Plattner. There is another actively maintained fork from that code base, Orion SSH, that saw an even more recent release, but which seems to get mentioned online much less than the other 2 forks. Has anybody here worked with either (or, if possible, both) of Ganymed and Orion and could kindly describe the development experience with either/both? Accuracy of documentation [existence of documentation?], stability, bugginess... - all of these would be highly interesting to me. Performance is not so important for my current project. If there is another pure-Java SSH implementation that should be used instead, please feel free to mention it, but please don't just mention a name...describe your judgment from actual experience. Sorry if this question may seem a bit "do my homework"-y, but I've really searched for reviews. Everything out there seems to be either a listing of implementations or short "use this! it's great!" snippets.

  • Database for Python Twisted

    - by Will
    There's an API for Twisted apps to talk to a database in a scalable way: twisted.enterprise.dbapi. The confusing thing is, which database to pick? The database will have a Twisted app that is mostly making inserts and updates and relatively few selects, and then other strictly read-only clients that are accessing the database directly, making selects. (The read-only users are not necessarily selecting the data that the Twisted app is inserting; it's not as though the database is being used as a message queue.) My understanding - which I'd like corrected/advised - is that:

    - Postgres is a great DB, but all the Python bindings - and there is a confusing maze of them - are abandonware.
    - There is psycopg2, but that makes a lot of noise about doing its own connection pooling and things; does this co-exist gracefully/usefully/transparently with the Twisted async database connection pooling and such?
    - SQLite is a great database for little things, but if used in a multi-user way it does whole-database locking, so performance would suck in the usage pattern I envisage.
    - MySQL - after the Oracle takeover, who'd want to adopt it now or adopt a fork?

    Is there anything else out there?
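
    For reference, the module in the Twisted releases I'm aware of is twisted.enterprise.adbapi. A hedged sketch of Twisted's own pooling on top of a blocking DB-API 2.0 driver (psycopg2 and the table name are assumptions, not from the question) - adbapi runs each query on a pooled connection in a worker thread, so the driver's own pooling is not needed:

        # Hedged sketch: Twisted's adbapi pool with psycopg2.
        from twisted.enterprise import adbapi
        from twisted.internet import reactor

        dbpool = adbapi.ConnectionPool("psycopg2", "dbname=app user=app",
                                       cp_min=1, cp_max=5)

        def insert_reading(value):
            # Fire-and-forget style insert; returns a Deferred.
            return dbpool.runOperation(
                "INSERT INTO readings (value) VALUES (%s)", (value,))

        def show(rows):
            print(rows)
            reactor.stop()

        dbpool.runQuery("SELECT count(*) FROM readings").addCallback(show)
        reactor.run()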

  • SSIS - Bulk Update at Database Field Level

    - by Adam
    Hello, here's our mission: receive files from clients. Each file contains anywhere from 1 to 1,000,000 records. Records are loaded to a staging area and business-rule validation is applied. Valid records are then pumped into an OLTP database in a batch fashion, with the following rules:

    - If the record does not exist (we have a key, so this isn't an issue), create it.
    - If the record exists, optionally update each database field. The decision is made based on one of 3 factors...I don't believe it's important what those factors are.

    Our main problem is finding an efficient method of optionally updating the data at a field level. This is applicable across ~12 different database tables, with anywhere from 10 to 150 fields in each table (original DB design leaves much to be desired, but it is what it is). Our first attempt has been to introduce a table that mirrors the staging environment (1 field in staging for each system field) and contains a masking flag. The value of the masking flag represents the 3 factors. We've then put an UPDATE similar to...

        UPDATE OLTPTable1
        SET Field1 = CASE WHEN Mask.Field1 = 0 THEN Staging.Field1
                          WHEN Mask.Field1 = 1 THEN COALESCE( Staging.Field1 , OLTPTable1.Field1 )
                          WHEN Mask.Field1 = 2 THEN COALESCE( OLTPTable1.Field1 , Staging.Field1 )
        ...

    As you can imagine, the performance is rather horrendous. Has anyone tackled a similar requirement? We're a MS shop using a Windows Service to launch SSIS packages that handle the data processing. Unfortunately, we're pretty much novices at this stuff.

  • Determining the color of a pixel in a bitmap using C# in a WPF app

    - by DanM
    The only way I found so far is System.Drawing.Bitmap.GetPixel(), but Microsoft has warnings for System.Drawing that are making me wonder if this is the "old way" to do it. Are there any alternatives? Here's what Microsoft says about the System.Drawing namespace. I also noticed that the System.Drawing assembly was not automatically added to the references when I created a new WPF project.

        System.Drawing Namespace
        The System.Drawing namespace provides access to GDI+ basic graphics functionality. More advanced functionality is provided in the System.Drawing.Drawing2D, System.Drawing.Imaging, and System.Drawing.Text namespaces. The Graphics class provides methods for drawing to the display device. Classes such as Rectangle and Point encapsulate GDI+ primitives. The Pen class is used to draw lines and curves, while classes derived from the abstract class Brush are used to fill the interiors of shapes.
        Caution: Classes within the System.Drawing namespace are not supported for use within a Windows or ASP.NET service. Attempting to use these classes from within one of these application types may produce unexpected problems, such as diminished service performance and run-time exceptions.
        - http://msdn.microsoft.com/en-us/library/system.drawing.aspx
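
    A hedged sketch of the WPF-native route, assuming the image is available as a BitmapSource (e.g. a BitmapImage loaded from a file): convert to a known pixel format and read the single pixel with CopyPixels, so System.Drawing is never referenced. The helper name is invented for illustration:

        // Hedged sketch: read one pixel from a BitmapSource without System.Drawing.
        using System.Windows;
        using System.Windows.Media;
        using System.Windows.Media.Imaging;

        static Color GetPixelColor(BitmapSource source, int x, int y)
        {
            // Normalise to 32-bit BGRA so the byte layout is known.
            var bgra = new FormatConvertedBitmap(source, PixelFormats.Bgra32, null, 0);
            byte[] px = new byte[4];
            bgra.CopyPixels(new Int32Rect(x, y, 1, 1), px, 4, 0);  // stride = 4 bytes for one pixel
            return Color.FromArgb(px[3], px[2], px[1], px[0]);     // BGRA -> ARGB
        }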

  • MySQL Gurus: How to pull a complex grid of data from MySQL database with one query?

    - by iopener
    Hopefully this is less complex than I think. I have one table of companies, another table of jobs, and a third table that contains a single entry for each employee in each job from each company. NOTE: Some companies won't have employees in some jobs, and some companies will have more than one employee in some jobs. The company table has a companyid and companyname field, the job table has a jobid and jobtitle field, and the employee table has employeeid, companyid, jobid and employeename fields. I want to build a table like this:

              +-----------+-----------+-----------+
              | Company A | Company B | Company C |
        ------+-----------+-----------+-----------+
        Job A | Emp 1     | Emp 2     |           |
        ------+-----------+-----------+-----------+
        Job B | Emp 3     |           | Emp 4     |
              |           |           | Emp 5     |
        ------+-----------+-----------+-----------+
        Job C |           | Emp 6     |           |
              |           | Emp 7     |           |
              |           | Emp 8     |           |
        ------+-----------+-----------+-----------+

    I had previously been looping through a result set of jobs, and for each job, looping through a result set of each company, and for each company, looping through each employee and printing it in a table (gross, but performance was not supposed to be a consideration). The app has grown in popularity, and now we have 100 companies and hundreds of jobs, and the server is crapping out (all the id fields are indexed). Any suggestions on how to write a single query to get this data? I don't need the company names or job titles (obviously), but I do need some way to identify where each row from the result should be printed. I'm imagining a result set that just contained a long list of joined employees, and I could write a loop to use the companyid and employeeid values to tell me when to create a new cell or table row. This works as long as there aren't ZERO employees; I would need a NULL employee name for that, I think? Am I completely on the wrong track? Thanks in advance for any ideas!
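
    A hedged sketch of one single-query shape that fits the loop described above (the table names companies, jobs and employees are assumed; only the column names come from the question): cross-join jobs with companies so empty cells come back with a NULL employee name, and order the rows so the printing loop can detect cell and row boundaries from jobid and companyid:

        -- Hedged sketch; table names assumed, columns taken from the question.
        SELECT j.jobid,
               c.companyid,
               e.employeename           -- NULL marks an empty cell
        FROM jobs j
        CROSS JOIN companies c
        LEFT JOIN employees e
               ON  e.jobid     = j.jobid
               AND e.companyid = c.companyid
        ORDER BY j.jobid, c.companyid, e.employeename;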

  • Legitimate uses of the Function constructor

    - by Marcel Korpel
    As repeatedly said, it is considered bad practice to use the Function constructor (also see the ECMAScript Language Specification, 5th edition, § 15.3.2.1): new Function ([arg1[, arg2[, … argN]],] functionBody) (where all arguments are strings containing argument names and the last (or only) string contains the function body). To recapitulate, it is said to be slow, as explained by the Opera team: Each time […] the Function constructor is called on a string representing source code, the script engine must start the machinery that converts the source code to executable code. This is usually expensive for performance – easily a hundred times more expensive than a simple function call, for example. (Mark ‘Tarquin’ Wilton-Jones) Though it's not that bad, according to this post on MDC (I didn't test this myself using the current version of Firefox, though). Crockford adds that [t]he quoting conventions of the language make it very difficult to correctly express a function body as a string. In the string form, early error checking cannot be done. […] And it is wasteful of memory because each function requires its own independent implementation. Another difference is that a function defined by a Function constructor does not inherit any scope other than the global scope (which all functions inherit). (MDC) Apart from this, you have to be attentive to avoid injection of malicious code, when you create a new Function using dynamic contents. Lots of disadvantages and it is intelligible that ECMAScript 5 discourages the use of the Function constructor by throwing an exception when using it in strict mode (§ 13.1). That said, T.J. Crowder says in an answer that [t]here's almost never any need for the similar […] new Function(...), either, again except for some advanced edge cases. So, now I am wondering: what are these “advanced edge cases”? Are there legitimate uses of the Function constructor?
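
    One frequently cited family of "advanced edge cases" is code that genuinely has to compile source supplied at runtime - templating engines, user-defined formula fields, sandboxed plugin loaders - where the string is compiled once and the resulting function is reused many times. A hedged sketch (the formula string is a stand-in, not from the question):

        // Hedged sketch: compile a user-supplied formula once, reuse it many times.
        var expr = "price * qty * (1 - discount)";   // e.g. read from configuration
        var calc = new Function("price", "qty", "discount",
                                "return (" + expr + ");");

        console.log(calc(10, 3, 0.1));   // 27
        console.log(calc(5, 2, 0));      // 10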

  • How to avoid multiple, unused has_many associations when using multiple models for the same entity (

    - by mikep
    Hello, I'm looking for a nice, Ruby/Rails-esque solution for something. I'm trying to split up some data using multiple tables, rather than just using one gigantic table. My reasoning is pretty much to try and avoid the performance drop that would come with having a big table. So, rather than have one table called books, I have multiple tables: books1, books2, books3, etc. (I know that I could use a partition, but, for now, I've decided to go the 'multiple tables' route.) Each user has their books placed into a specific table. The actual book table is chosen when the user is created, and all of their books go into the same table. The goal is to try and keep each table pretty much even -- but that's a different issue. One thing I don't particularly want to have is a bunch of unused associations in the User class. Right now, it looks like I'd have to do the following:

        class User < ActiveRecord::Base
          has_many :books1, :books2, :books3, :books4, :books5
        end

        class Books1 < ActiveRecord::Base
          belongs_to :user
        end

        class Books2 < ActiveRecord::Base
          belongs_to :user
        end

    First off, for each specific user, only one of the book tables would be usable/applicable, since all of a user's books are stored in the same table. So, only one of the associations would be in use at any time, and any other has_many :bookX association that was loaded would be a waste. I don't really know what Ruby/Rails does internally with all of those has_many associations, though, so maybe it's not so bad. But right now I'm thinking that it's really wasteful, and that there may just be a better, more efficient way of doing this. Is there some sort of special Ruby/Rails methodology that could be applied here to avoid having to have all of those has_many associations? Also, does anyone have any advice on how to abstract the fact that there's multiple book tables behind a single books model/class?
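
    A hedged sketch of one way to keep User free of per-table associations: resolve the book class at runtime from a column that records which table the user was assigned at creation (books_table_index is an assumed column name, not from the question), instead of declaring five has_many lines:

        # Hedged sketch, Rails 2.3-era API; books_table_index is an assumed column.
        class User < ActiveRecord::Base
          def books_class
            "Books#{books_table_index}".constantize   # => Books1, Books2, ...
          end

          def books
            books_class.find_all_by_user_id(id)       # Rails 2.3-style dynamic finder
          end
        end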

  • Is it safe to use random Unicode for complex delimiter sequences in strings?

    - by ccomet
    Question: In terms of program stability and ensuring that the system will actually operate, how safe is it to use chars like ¦, § or ‡ for complex delimiter sequences in strings? Can I reliably believe that I won't run into any issues with a program reading these incorrectly? I am working in a system, using C# code, in which I have to store a fairly complex set of information within a single string. The readability of this string is only necessary on the computer side; end-users should only ever see the information after it has been parsed by the appropriate methods. Because some of the data in these strings will be collections of variable size, I use different delimiters to identify what parts of the string correspond to a certain tier of organization. There are enough cases that the standard sets of ;, |, and similar ilk have been exhausted. I considered two-char delimiters, like ;# or ;|, but I felt that it would be very inefficient. There probably isn't that large of a performance difference in storing with one char versus two chars, but when I have the option of picking the smaller option, it just feels wrong to pick the larger one.

    So finally, I considered using the set of characters like the double dagger and section sign. They only take up one char, and they are definitely not going to show up in the actual text that I'll be storing, so they won't be confused for anything. But character encoding is finicky. While the visibility to the end user is meaningless (since they, in fact, won't see it), I recently became concerned about how the programs in the system will read it. The string is stored in one database, while a separate program is responsible for both encoding and decoding the string into different object types for the rest of the application to work with. And if something that is expected to be written one way is possibly written another, then maybe the whole system will fail, and I can't really let that happen. So is it safe to use these kinds of chars for background delimiters?
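
    A hedged sketch of the round-trip in question. Using the escaped code point rather than a literal ‡ in source sidesteps source-file encoding surprises; as long as the database column and connection both use a Unicode type (e.g. NVARCHAR), the character should come back as written:

        // Hedged sketch: pack and unpack one tier with U+2021 (double dagger).
        const char TierDelimiter = '\u2021';

        string packed = string.Join(TierDelimiter.ToString(),
                                    new[] { "partA", "partB", "partC" });
        string[] parts = packed.Split(TierDelimiter);   // ["partA", "partB", "partC"]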

  • Creating content for rails-based applications

    - by Matthias Hryniszak
    Hi, I'm facing the problem of cleaning up my application in Ruby on Rails. What I have is a pretty standard 3-panel, header-and-footer layout where different parts of the screen contain different functionality. By that I mean, for example, that the header contains (among other things) a select that allows one to choose parts of the application, and a context-dependent menu. The main content area obviously contains the most interactive stuff, whereas the side panels contain quick links and things like a shopping-cart preview, a list of potentially attractive products for the customer, a selector to narrow down the list of options... I was wondering how to go about simplifying the design. Right now I have the stuff that provides data for the "common" parts (as opposed to the direct content that's placed in the center) called from all the actions (with a filter), but that doesn't feel right to me. I've read that "components" are also not the way to go, for obvious performance reasons. Is there something that's more component-oriented (other frameworks do have that kind of stuff - Grails: <ui:include ../>, ASP.NET MVC: <% Html.RenderAction() %>)? Best regards, Matthias.
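
    A hedged sketch of one common Rails arrangement: the shared regions live as partials rendered from the layout, with the data they need loaded by a before_filter on ApplicationController. The partial paths and helper names below are assumptions, not from the question:

        <%# app/views/layouts/application.html.erb -- hedged sketch %>
        <div id="sidebar">
          <%= render :partial => 'shared/cart_preview' %>
          <%= render :partial => 'shared/suggested_products' %>
        </div>
        <div id="content"><%= yield %></div>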

  • Ruby 1.9 GarbageCollector, GC.disable/enable

    - by seb
    I'm developing a Rails 2.3, Ruby 1.9.1 web application that does quite a bunch of calculation before each request. For every request it has to calculate a graph with 300 nodes and ~1000 edges. The graph and all its nodes, edges and other objects are initialized for every request (~2000 objects) - actually they are cloned from an uncalculated cached graph using Marshal.load(Marshal.dump()). Performance is quite an issue here. Right now the whole request takes on average 150ms. I then saw that during a request, parts of the calculation randomly take longer. Assuming that this might be the garbage collector kicking in, I wrapped the request in GC.disable and GC.enable, so that the request waits with garbage collecting until calculating and rendering have finished.

        def query
          GC.disable
          calculate
          respond_to do |format|
            format.html { render }
          end
          GC.enable
        end

    The average request now takes about 100ms (50 ms less). But I'm unsure if this is a good/stable solution; I assume there must be drawbacks to doing that. Does anybody have experience with a similar problem, or see problems with the above code?
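
    One concrete risk with the version above: if calculate or render raises, GC.enable is never reached and the collector stays off for the rest of the process. A hedged sketch that guards against that:

        # Hedged sketch: re-enable the collector even when the action raises.
        def query
          GC.disable
          calculate
          respond_to do |format|
            format.html { render }
          end
        ensure
          GC.enable   # runs on both the normal and the exceptional path
        end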

  • Threading Practice with Polling.

    - by Stacey
    I have a C# application that has to constantly read from a program; sometimes there is a chance it will not find what it needs, which will throw an exception. This is a limitation of the program it has to read from. This frequently causes the program to lock up as it tries to poll. So I solved it by spawning the 'polling' off into a separate thread. However, watching the debugger, the thread is created and destroyed each time. I am uncertain if this is typical or not; but my question is, is this good practice, or am I using the threading for the wrong purpose?

        ProgramReader
        {
            static Thread oThread;

            public static void Read( Program program )
            {
                // check to see if the program exists
                if ( false )
                    oThread = new ThreadStart(program.Poll);

                if (oThread != null || !oThread.IsAlive )
                    oThread.Start();
            }
        }

    This is my general pseudocode. It runs every 10 seconds or so. Is this a huge hit to performance? The operation it performs is relatively small and lightweight; just repetitive.
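
    A hedged sketch of the alternative the question hints at: one long-lived background poller that is started once, instead of constructing a thread on every call. The Program type and Poll() call are taken from the pseudocode above; everything else is an assumption:

        // Hedged sketch: single background thread that polls every 10 seconds.
        using System;
        using System.Threading;

        static class ProgramReader
        {
            static Thread pollThread;

            public static void Start(Program program)
            {
                if (pollThread != null && pollThread.IsAlive)
                    return;                               // already polling

                pollThread = new Thread(() =>
                {
                    while (true)
                    {
                        try { program.Poll(); }
                        catch (Exception) { /* target not available yet; retry later */ }
                        Thread.Sleep(TimeSpan.FromSeconds(10));
                    }
                });
                pollThread.IsBackground = true;           // don't keep the process alive
                pollThread.Start();
            }
        }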

  • call addsubview again causes slowdown

    - by Tom
    Hi guys, I am writing a little music game for the iPhone. I am almost done; this is the only issue that keeps me from rolling it out, so any help is much appreciated. This is what I do: in my appDelegate I add my menu-view-screen to the window. The menu-view-screen acts as a container and controls which view gets presented to the user. That means, on the menu-view-screen I have 4 buttons (new game, options, faq, highscore). When the user clicks on a button, something like this happens:

        if (self.gameViewController == nil) {
            GameViewController *viewController = [[GameViewController alloc] initWithNibName:@"GameViewController" bundle:nil];
            self.gameViewController = viewController;
            [viewController release];
        }
        [self.view addSubview:self.gameViewController.view];
        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(handleSwitchViewNotificationFromGameView:)
                                                     name:@"SwitchView"
                                                   object:gameViewController];

    When the user returns to the menu, this piece of code gets executed:

        [[NSNotificationCenter defaultCenter] removeObserver:self];
        [self.gameViewController viewWillDisappear:YES];
        [self.gameViewController.view removeFromSuperview];

    This works fine for all screens except the game screen (well, this is the only one with heaps of user interaction), meaning the responsiveness of the iPhone (when playing tones) gets really slow. The performance is fine when I display the game view for the first time. It starts getting slower as soon as I add it to the menu view's container subviews again (addSubview), basically when opening up a new game. Any ideas what causes this, or how to get around it? Thanks heaps. Best regards, Tom

  • Porting - Shared Memory x32 & x64 processes

    - by dpb
    A 32-bit host Windows application sets up shared memory (using a memory-mapped file / the CreateFileMapping() API), and then other 32-bit client processes use this shared memory to communicate with each other. I am planning to port the host application to a 64-bit platform, and once it is ready, I intend that both 32-bit and 64-bit client processes should be able to use the shared memory set up by the main 64-bit host application. The original code written for the x32 host application uses "size_t" almost everywhere; since this differs from 4 bytes to 8 bytes as we move from x32 to x64, I am looking at replacing it. I intend to replace "size_t" with "unsigned long long", so that its size will be the same on 32-bit and 64-bit. Can you suggest a better alternative? Also, will the use of "unsigned long long" have a performance impact on the x32 app? I guess yes? Research done - found very useful articles: a) 20 issues in porting from 32 bit to 64 bit (www.viva64.com); b) there is no way to restrict/change "size_t" on the x64 platform to 4 bytes using compiler flags or any hooks/crooks, since it is a typedef.
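
    A hedged sketch of the usual alternative to a blanket "unsigned long long" swap: give every structure that actually lives in the mapping an explicit, fixed-width, identically packed definition compiled into both the 32-bit and 64-bit sides. The field names below are invented for illustration:

        // Hedged sketch: one layout, byte-identical for x86 and x64 clients.
        #include <cstdint>

        #pragma pack(push, 1)
        struct SharedHeader {
            std::uint32_t version;        // bump when the layout changes
            std::uint64_t payload_bytes;  // was size_t in the 32-bit code
            std::uint64_t read_cursor;
            std::uint64_t write_cursor;
        };
        #pragma pack(pop)

        static_assert(sizeof(SharedHeader) == 28, "layout must match on both sides");

    Outside the shared layout, plain size_t can stay; only the types that cross the 32/64 boundary need pinning, and 64-bit arithmetic on a 32-bit build costs a little but is rarely what dominates.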

  • Practical rules for premature optimization

    - by DougW
    It seems that the phrase "Premature Optimization" is the buzz-word of the day. For some reason, iphone programmers in particular seem to think of avoiding premature optimization as a pro-active goal, rather than the natural result of simply avoiding distraction. The problem is, the term is beginning to be applied more and more to cases that are completely inappropriate. For example, I've seen a growing number of people say not to worry about the complexity of an algorithm, because that's premature optimization (eg http://stackoverflow.com/questions/2190275/help-sorting-an-nsarray-across-two-properties-with-nssortdescriptor/2191720#2191720). Frankly, I think this is just laziness, and appalling to disciplined computer science. But it has occurred to me that maybe considering the complexity and performance of algorithms is going the way of assembly loop unrolling, and other optimization techniques that are now considered unnecessary. What do you think? Are we at the point now where deciding between an O(n^n) and O(n!) complexity algorithm is irrelevant? What about O(n) vs O(n*n)? What do you consider "premature optimization"? What practical rules do you use to consciously or unconsciously avoid it? This is a bit vague, but I'm curious to hear other peoples' opinions on the topic.

  • PHP Database connection practice

    - by Phill Pafford
    I have a script that connects to multiple databases (Oracle, MySQL and MSSQL); each database connection might not be used each time the script runs, but all could be used in a single script execution. My question is: is it better to connect to all the databases once at the beginning of the script, even though all the connections might not be used? Or is it better to connect to them as needed? The only catch is that I would need to have the connection call in a loop (so the database connection would be new for X amount of times in the loop).

    Example Code #1:

        // Connections at the beginning of the script
        $dbh_oracle = connect2db();
        $dbh_mysql  = connect2db();
        $dbh_mssql  = connect2db();

        for ($i=1; $i<=5; $i++) {
            // NOTE: might not use all the connections
            $rs = queryDb($query,$dbh_*); // $dbh can be any of the 3 connections
        }

    Example Code #2:

        // Connections in the loop
        for ($i=1; $i<=5; $i++) {
            // NOTE: Would use all the connections but connecting multiple times
            $dbh_oracle = connect2db();
            $dbh_mysql  = connect2db();
            $dbh_mssql  = connect2db();

            $rs_oracle = queryDb($query,$dbh_oracle);
            $rs_mysql  = queryDb($query,$dbh_mysql);
            $rs_mssql  = queryDb($query,$dbh_mssql);
        }

    Now I know you could use a persistent connection, but would that be one connection open for each database in the loop as well? Like mysql_pconnect(), mssql_pconnect() and adodb's persistent connection method for Oracle. I know that persistent connections can also be resource hogs, and I'm looking for best performance/practice.
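
    A hedged sketch of a third option: connect lazily and cache the handle, so each backend is opened at most once per run and only if something in the loop actually asks for it. connect2db() and queryDb() are the question's own placeholder functions; passing the backend name to connect2db() is an assumption:

        // Hedged sketch: lazy, memoised connections using the question's helpers.
        function db($name)
        {
            static $handles = array();
            if (!isset($handles[$name])) {
                $handles[$name] = connect2db($name);   // opened on first use only
            }
            return $handles[$name];
        }

        for ($i = 1; $i <= 5; $i++) {
            $rs_mysql = queryDb($query, db('mysql')); // oracle/mssql are never opened
        }                                             // unless the loop requests them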

  • Complex LINQ paging algorithm

    - by sharepointmonkey
    We have a list of projects that may or may not have a collection of subprojects. Our report needs to contain all the projects except those that are the parent project of a subproject. I need to page this into pages of, say, 25 rows. But if subprojects appear on that page then ALL the subprojects of that project must appear on the same page. So more than 25 items may appear if necessary. I've got as far as var pagedProjects = db.Projects.Where(x => !x.SubProjects.Any()).Skip( (pageNo -1) * pageSize).Take(pageSize); Obviously, this fails the second part of the requirements. As a further pain in the arse, I need to have a pager control on the report. So I'll need to be able to calculate the total number of pages. I could loop through the whole table of projects but the performance will suffer. Can anybody come up with a paged solution? EDIT - I should probably mention that SubProjects joins back onto Projects via a selfreferencing foreign key so the whole lot comes back as an IQueryable<Project>.

  • Database and logic layer for ASP.NET MVC application

    - by Ismail
    I'm going to start a new project which is going to be small initially but may grow big over the years. I'm strongly convinced that I'm going to use ASP.NET MVC with jQuery for the UI. I want to go with MySQL as the database for various reasons, but I'm worried about a few things. I have a good few years of experience working on SQL Server databases, and on one project I had a bad experience creating and managing stored procedures on a MySQL database. I'm totally new to LINQ, but I see that it is easier to use once you are familiar with it. The first thing is that accessing data should be easy. So I thought I should use LINQ to SQL with MySQL, but somewhere I read that it is not directly supported; however, the MySQL .NET connector adds support for Entity Framework. I don't know what the pros and cons of that are. I would love to implement the repository pattern, as it allows applying filters in the logic layer rather than in the data access layer. Will that be possible if I use Entity Framework? I'm not clear on how I should go about all this, or whether I should just forget everything and use LINQ to SQL on SQL Server. I'm also concerned about performance. Someone told me that if we use Entity Framework it fetches a lot of data and then filters it. Is that right? So the questions basically are:

    - Is LINQ with MySQL possible? If yes, where can I get more details on it?
    - What are the pros and cons of using Entity Framework with MySQL?
    - Will it be easy to access data using Entity Framework with MySQL?
    - Will I be able to implement the repository pattern, which allows applying filters in the logic layer rather than the data access layer (when I use Entity Framework with MySQL)?
    - Does it fetch a hell of a lot of data from the database and then apply the filter on it?

    If that sounds like too many questions, then just let me know what you would do (with a considered reason) in this situation as an experienced person in this area; that should answer my question.

  • OpenGL equivalent of GDI's HatchBrush or PatternBrush?

    - by Ptah- Opener of the Mouth
    I have a VB6 application (please don't laugh) which does a lot of drawing via BitBlt and the standard VB6 drawing functions. I am running up against performance issues (yes, I do the regular tricks like drawing to memory). So, I decided to investigate other ways of drawing, and have come upon OpenGL. I've been doing some experimenting, and it seems straightforward to do most of what I want; the application mostly only uses very simple drawing -- relatively large 2D rectangles of solid colors and such -- but I haven't been able to find an equivalent to something like a HatchBrush or PatternBrush. More specifically, I want to be able to specify a small monochrome pixel pattern, choose a color, and whenever I draw a polygon (or whatever), instead of it being solid, have it automatically tiled with that pattern, not translated or rotated or skewed or stretched, with the "on" bits of the pattern showing up in the specified color, and the "off" bits of the pattern left displaying whatever had been drawn under the area that I am now drawing on. Obviously I could do all the calculations myself. That is, instead of drawing as a polygon which will somehow automatically be tiled for me, I could calculate all of the lines or pixels or whatever that actually need to be drawn, then draw them as lines or pixels or whatever. But is there an easier way? Like in GDI, where you just say "draw this polygon using this brush"? I am guessing that "textures" might be able to accomplish what I want, but it's not clear to me (I'm totally new to this and the documentation I've found is not entirely obvious); it seems like textures might skew or translate or stretch the pattern, based upon the vertices of the polygon? Whereas I want the pattern tiled. Is there a way to do this, or something like it, other than brute force calculation of exactly the pixels/lines/whatever that need to be drawn? Thanks in advance for any help.
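
    For the fixed-function pipeline this app would likely use, polygon stippling is the closest hedged analogue to a GDI PatternBrush: a 32x32, window-aligned bit pattern where "on" bits take the current colour and "off" bits leave the framebuffer untouched. A minimal sketch (the pattern bytes are only a placeholder; a real mask would fill all 128 bytes):

        /* Hedged sketch: screen-aligned 32x32 pattern fill, GL 1.x fixed function. */
        #include <GL/gl.h>

        static const GLubyte hatch[128] = {
            /* 32 rows x 4 bytes; only the first rows are spelled out here,
               the remaining bytes default to 0 in this sketch */
            0xAA, 0xAA, 0xAA, 0xAA,
            0x55, 0x55, 0x55, 0x55,
            0xAA, 0xAA, 0xAA, 0xAA,
            0x55, 0x55, 0x55, 0x55,
        };

        void draw_hatched_rect(float x0, float y0, float x1, float y1)
        {
            glEnable(GL_POLYGON_STIPPLE);
            glPolygonStipple(hatch);      /* pattern is window-aligned: no skew or stretch */
            glColor3f(0.2f, 0.4f, 0.8f);  /* "on" bits are drawn in this colour */
            glRectf(x0, y0, x1, y1);
            glDisable(GL_POLYGON_STIPPLE);
        }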

  • Ultra-Portable Laptop or Tablet PC for Development and Sketching

    - by Nelson LaQuet
    I am a software developer that primarily writes in PHP, [X]HTML, CSS, Javascript, C# and C++. I use Eclipse for web development, Visual Studio 2008 for C++ and C# work, TortoiseSVN, a Subversion server for local repositories, SQL Server Express, Apache and MySQL. I also use Office 2007 for word processing and spreadsheets, and use Vista Ultimate 64 as my primary operating system. The only other things I do on my laptop are watch movies, surf the internet and listen to music. I currently have an Acer Aspire 5100 (1.4 GHz AMD Turion X2, 2 GB of RAM and a 15.4" screen). This thing does not cut it in performance or portability, and in addition, my DVD drive failed. And before anybody posts about Vista: I have had XP Professional 32 on it for the last two years, and recently upgraded to Vista 64. It is actually faster (with Aero disabled) than XP, so it is not the OS that is causing the laptop to be slow. I usually sketch a lot, for explaining things, developing user interfaces and software architecture. Because of my requirements, I was thinking about a Lenovo X61 Tablet PC. It outperforms my current laptop, is significantly more portable, and... is a tablet. My question is: do any other software developers use this (or other tablets) for programming? Does it help to be able to sketch on the computer itself? And is it capable of being a good development machine? Will it handle the software listed above? If not, what is the best ultra-portable laptop that is good for programming? Or are ultra-portable laptops even good for programming? I could manage with my 15.4" screen, but am spoiled by my two 19" screens at my home desktop and my job's workstation.

  • What is the relation between programming and mathematics?

    - by Math Grad
    Programmers seem to think that their work is quite mathematical. I understand this when you try to optimize something for performance, find the most efficient algorithm, etc. But it patently seems false when you look at a billing application for a shop, or systems software riddled with I/O calls. So what is it exactly? Is computation and associated programming really mathematical? Here I particularly have in mind the words of the philosopher Schopenhauer:

        That arithmetic is the basest of all mental activities is proved by the fact that it is the only one that can be accomplished by means of a machine. Take, for instance, the reckoning machines that are so commonly used in England at the present time, and solely for the sake of convenience. But all analysis finitorum et infinitorum is fundamentally based on calculation. Therefore we may gauge the “profound sense of the mathematician,” of whom Lichtenberg has made fun, in that he says: “These so-called professors of mathematics have taken advantage of the ingenuousness of other people, have attained the credit of possessing profound sense, which strongly resembles the theologians’ profound sense of their own holiness.”

    I lifted the above quote from here. It seems that programmers are doing precisely the sort of mechanized, base mental activity the grand old man is contemptuous about. So what exactly is the deal? Is programming really the "good" kind of mathematics, or just the baser type, or altogether something else just meant for business, not to be confused with a pure discipline?

  • Interface for reading variable length files with header and footer.

    - by John S
    I could use some hints or tips for a decent interface for reading files with special characteristics. The files in question have a header (~120 bytes), a body (1 byte - 3 GB) and a footer (4 bytes). The header contains information about the body, and the footer is only a simple CRC32 value of the body. I use Java, so my idea was to extend the "InputStream" class and add a constructor such as "public MyInStream( InputStream in )" where I immediately read the header and then direct the overridden read()s at the body. Problem is, I can't give the user of the class the CRC32 value until the whole body has been read. Because the file can be 3 GB large, putting it all in memory is a bad idea. Reading it all into a temporary file is going to be a performance hit if there are many small files. I don't know how large the file is, because the InputStream doesn't have to be a file; it could be a socket. Looking at it again, maybe extending InputStream is a bad idea. Thank you for reading the confused thoughts of a tired programmer. :)
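
    A hedged sketch along the lines described: read the header eagerly in the constructor, then let the body stream through java.util.zip.CheckedInputStream so the CRC32 accumulates as the caller reads, with no buffering of the 3 GB body. This assumes the header tells the caller where the body ends, so the 4 footer bytes are not fed through read():

        // Hedged sketch: header read up front, CRC32 of the body computed on the fly.
        import java.io.DataInputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.util.zip.CRC32;
        import java.util.zip.CheckedInputStream;

        public class MyInStream extends CheckedInputStream {
            private final byte[] header = new byte[120];   // header length assumed fixed

            public MyInStream(InputStream in) throws IOException {
                super(in, new CRC32());
                // Read the header directly from the wrapped stream so it is
                // not included in the body checksum.
                new DataInputStream(in).readFully(header);
            }

            public byte[] header()  { return header.clone(); }

            // Valid once the caller has read the body to its end.
            public long bodyCrc()   { return getChecksum().getValue(); }
        }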

  • Checking for empty arrays: count vs empty

    - by Dan McG
    This question on 'How to tell if a PHP array is empty' had me thinking of this question: is there a reason that count should be used instead of empty when determining whether an array is empty or not? My personal thought would be: if the 2 are equivalent for the case of empty arrays, you should use empty, because it gives a boolean answer to a boolean question. From the question linked above, it seems that count($var) == 0 is the popular method. To me, while technically correct, it makes no sense. E.g. Q: $var, are you empty? A: 7. Hmmm... Is there a reason I should use count == 0 instead, or is it just a matter of personal taste? As pointed out by others in comments for a now-deleted answer, count will have performance impacts for large arrays because it will have to count all elements, whereas empty can stop as soon as it knows it isn't empty. So, if they give the same results in this case, but count is potentially inefficient, why would we ever use count($var) == 0?
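
    A hedged micro-illustration of the two spellings side by side (nothing here is from the linked question):

        // Both checks agree on plain arrays; empty() reads as the boolean it is.
        $a = array();
        $b = array(1, 2, 3);

        var_dump(empty($a));         // bool(true)
        var_dump(count($a) == 0);    // bool(true)
        var_dump(empty($b));         // bool(false)
        var_dump(count($b) == 0);    // bool(false)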

  • SQL Server becomes slow after restart

    - by Tobi DM
    We use SQL Server 2005 on a Windows Server 2008. The server has 48 GB RAM. SQL Server is configured to use 40 GB RAM. There is only one database hosted (about 70 GB). The only app beside SQL Server is our app server, which connects the clients to the database. Now we encounter the following problem: after a restart of the server the performance is great. The server grabs the 40 GB RAM which it is allowed to and then runs fast as hell. But after about 4 weeks the system becomes slower and slower. The execution time of statements (seen in the profiler) rises slowly. But I cannot see that there is something going wrong on the server:

    - CPU usage is at about 20%.
    - I/O also seems to be no problem.
    - The process monitor does not show any strange apps or anything like that.
    - The event log has no interesting messages.
    - No open transactions or blocking to see.

    We already tried the following things, without effect:

    - Dropped the cache by using the statements DBCC FREEPROCCACHE, DBCC FREESYSTEMCACHE('ALL') and DBCC DROPCLEANBUFFERS.
    - Restarted the app server we are using.
    - Restarted the SQL Server service.

    But nothing helped except restarting the whole server. Any ideas?

  • gcc, strict-aliasing, and horror stories

    - by Joseph Quinsey
    In http://stackoverflow.com/questions/2906365/gcc-strict-aliasing-and-casting-through-a-union I asked whether anyone had encountered problems with union punning through pointers. So far, the answer seems to be No. This question is broader: do you have any horror stories about gcc and strict-aliasing? Background: quoting from AndreyT's answer in http://stackoverflow.com/questions/2771023/c99-strict-aliasing-rules-in-c-gcc/2771041#2771041: "Strict aliasing rules are rooted in parts of the standard that were present in C and C++ since the beginning of [standardized] times. The clause that prohibits accessing object of one type through a lvalue of another type is present in C89/90 (6.3) as well as in C++98 (3.10/15). ... It is just that not all compilers wanted (or dared) to enforce it or rely on it." Well, gcc is now daring to do so, with its -fstrict-aliasing switch. And this has caused some problems. See, for example, the excellent article http://davmac.wordpress.com/2009/10/ about a MySQL bug, and the equally excellent discussion in http://cellperformance.beyond3d.com/articles/2006/06/understanding-strict-aliasing.html. Some other less-relevant links:

    - http://stackoverflow.com/questions/1225741/performance-impact-of-fno-strict-aliasing
    - http://stackoverflow.com/questions/754929/strict-aliasing
    - http://stackoverflow.com/questions/262379/when-is-char-safe-for-strict-pointer-aliasing
    - http://stackoverflow.com/questions/725138/how-to-detect-strict-aliasing-at-compile-time

    So to repeat, do you have a horror story of your own? Problems not indicated by -Wstrict-aliasing would, of course, be preferred. And other C compilers are also welcome.
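
    For readers new to the topic, a hedged sketch of the classic shape such horror stories take: reading a float's bits through an int lvalue. With -O2 (which enables -fstrict-aliasing) GCC may assume *ip cannot alias f and reorder or drop accesses; memcpy is the well-defined alternative and typically compiles to a single move:

        /* Hedged sketch of a strict-aliasing violation and its safe counterpart. */
        #include <stdio.h>
        #include <string.h>
        #include <stdint.h>

        static uint32_t bits_via_cast(float f)
        {
            uint32_t *ip = (uint32_t *)&f;   /* undefined behaviour: wrong-type lvalue */
            return *ip;
        }

        static uint32_t bits_via_memcpy(float f)
        {
            uint32_t u;
            memcpy(&u, &f, sizeof u);        /* well-defined type pun */
            return u;
        }

        int main(void)
        {
            printf("%08x %08x\n",
                   (unsigned)bits_via_cast(1.0f),
                   (unsigned)bits_via_memcpy(1.0f));
            return 0;
        }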
