Search Results

Search found 19555 results on 783 pages for 'job performance'.


  • Python file input string: how to handle escaped unicode characters?

    - by Michi
    In a text file (test.txt), my string looks like this: Gro\u00DFbritannien. Reading it, Python escapes the backslash:

        >>> file = open('test.txt', 'r')
        >>> input = file.readline()
        >>> input
        'Gro\\u00DFbritannien'

    How can I have this interpreted as unicode? decode() and unicode() won't do the job. The following code writes Gro\u00DFbritannien back to the file, but I want it to be Großbritannien:

        >>> input.decode('latin-1')
        u'Gro\\u00DFbritannien'
        >>> out = codecs.open('out.txt', 'w', 'utf-8')
        >>> out.write(input)
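
    A minimal sketch of one common fix (my own illustration, not from the excerpt): in Python 2, which the question's syntax implies, the 'unicode_escape' codec interprets \uXXXX sequences inside a byte string, so the line can be decoded before being written out as UTF-8:

        import codecs

        with open('test.txt', 'r') as f:
            raw = f.readline().strip()          # 'Gro\\u00DFbritannien'

        decoded = raw.decode('unicode_escape')  # u'Gro\xdfbritannien'

        with codecs.open('out.txt', 'w', 'utf-8') as out:
            out.write(decoded)                  # out.txt now contains Großbritannien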

    Read the article

  • Can I start my new career as a web application developer over the age of 32?

    - by Sami
    Greetings guys. I am 32 years old. I graduated from university in 2005, but since then I haven't worked in my field as a developer, and I don't have any experience in that area. My current career is software testing, but I am not satisfied with that job, since I don't see any future in it and I don't know where its path leads. Now I have decided to take extra courses in VB.NET and ASP.NET, since I want to change careers and become a web developer. But one thing always disturbs me: I feel that the time has passed and I am too old to become a web developer. Is my feeling true? And are there people who started programming at a late age and succeeded? Thanks

    Read the article

  • Collection wrapping an array is read-only. Possible to make it writable without casting?

    - by Brian Triplett
    I have a Collection<T> property that wraps an array, like this:

        T[] array;

        public Collection<T> Items
        {
            get { return new Collection<T>(array); }
        }

    When I attempt to assign to the collection via:

        T variable;
        Items[i] = variable;

    I get a NotSupportedException because the collection's IsReadOnly property is true. It turns out that this is a design choice by Microsoft. Does anyone know a workaround that does NOT involve enumeration? It could be done if the underlying data were not an array, but I enjoy the performance gains, because the data is fixed-length.

    Read the article

  • pros and cons of TryCatch versus TryParse

    - by Vijesh
    What are the pros and cons of using either of the following approaches to pulling out a double from an object? Beyond just personal preferences, issues I'm looking for feedback on include ease of debugging, performance, maintainability, etc.

        public static double GetDouble(object input, double defaultVal)
        {
            try
            {
                return Convert.ToDouble(input);
            }
            catch
            {
                return defaultVal;
            }
        }

        public static double GetDouble(object input, double defaultVal)
        {
            double returnVal;
            if (double.TryParse(input.ToString(), out returnVal))
            {
                return returnVal;
            }
            else
            {
                return defaultVal;
            }
        }

    Read the article

  • .NET Reflector and getters/setters issue

    - by Humberto
    I'm using an up-to-date .NET Reflector to disassemble an internal legacy app whose source code is almost impossible to recover. I need to find the cause of a nasty bug, and then possibly patch it. Reflector did its usual good job of re-creating the project's structure, but I soon discovered that every property call was left "expanded" into its get_() and set_() method signatures, rendering the source code impossible to compile. A quick Visual Studio Search/Replace with a regex solved these cases, but it's awkward. Is there a way to make Reflector behave correctly?

    Read the article

  • What are the pros and cons to keeping SQL in Stored Procs versus Code

    - by Guy
    What are the advantages/disadvantages of keeping SQL in your C# source code or in stored procs? I've been discussing this with a friend on an open source project that we're working on (a C# ASP.NET forum). At the moment, most of the database access is done by building the SQL inline in C# and calling the SQL Server DB. So I'm trying to establish which, for this particular project, would be best. So far I have:

    Advantages for in code:
      - Easier to maintain - don't need to run a SQL script to update queries
      - Easier to port to another DB - no procs to port

    Advantages for stored procs:
      - Performance
      - Security

    Read the article

  • Most Efficient Alternative Method of Storing Settings for iPhone Apps

    - by JPK
    I am not using the Settings bundle to store the settings for my app, as I prefer to allow the user to access the settings within the app (they may be changed fairly often). I do realize that there is the option to do both, but for now, I am trying to find the most optimal place to store the settings within the app. I have a good number of settings (from what I have read, probably too many for NSUserDefaults), and the two main options I am considering are:

      1) storing the settings in a dictionary in the plist, loading the settings into an NSDictionary property in the app delegate and accessing them via the sharedDelegate

      2) storing the settings in a Core Data entity (1 row on a Settings entity), loading the settings into a Settings object in the app delegate and accessing them via the sharedDelegate

    Of these two, which would be the optimal method, performance-wise?

    Read the article

  • Function Object in Java

    - by Tony
    I want to implement a JavaScript-like method in Java - is this possible? Say I have a Person class:

        public class Person {
            private String name;
            private int age;
            // constructor, accessors are omitted
        }

    And a list of Person objects:

        Person p1 = new Person("Jenny", 20);
        Person p2 = new Person("Kate", 22);
        List<Person> pList = Arrays.asList(new Person[] {p1, p2});

    I want to implement a method like this:

        modList(pList, new Operation (Person p) {
            incrementAge(Person p) { p.setAge(p.getAge() + 1); }
        });

    modList receives two params: one is a list, the other is the "function object". It loops over the list and applies the function to every element in the list. In a functional programming language this is easy; I don't know how Java does this. Maybe it could be done through a dynamic proxy - does that have a performance trade-off?
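
    A minimal sketch of the classic pre-lambda answer (my own illustration; Operation and modList are hypothetical names, and it reuses the question's Person class with its omitted accessors filled in): a one-method interface plays the role of the function object, and an anonymous class supplies the behavior at the call site:

        import java.util.Arrays;
        import java.util.List;

        // The "function object": a one-method interface.
        interface Operation<T> {
            void apply(T item);
        }

        public class ModListDemo {
            // Loop over the list and apply the operation to every element.
            static <T> void modList(List<T> list, Operation<T> op) {
                for (T item : list) {
                    op.apply(item);
                }
            }

            public static void main(String[] args) {
                List<Person> pList = Arrays.asList(new Person("Jenny", 20),
                                                   new Person("Kate", 22));

                // An anonymous class supplies the behavior, much like a closure.
                modList(pList, new Operation<Person>() {
                    public void apply(Person p) {
                        p.setAge(p.getAge() + 1);
                    }
                });
            }
        }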

    Read the article

  • apache mod_rewrite redirection problem

    - by warttack
    OK, I am having a problem with redirection on Apache. I have a domain configured on my hosting account, but the domain needs to be redirected to a folder. E.g.:

        /        is the root of the server, where mysite.com answers
        /mysite  is where the files are

    So I got this htaccess code to do the job:

        Options +FollowSymLinks
        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^(www.)?mysite.com$
        RewriteCond %{REQUEST_URI} !^/mysite/
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /mysite/$1
        RewriteCond %{HTTP_HOST} ^(www.)?mysite.com$
        RewriteRule ^(/)?$ mysite/index.php [L]

    Plus, I made an index.php to redirect to the mysite folder. Everything seems to be working well; the only problem is that I added a forum at /mysite/forums/, and for some reason, instead of getting mysite.com/forums/ in the browser, I'm getting mysite.com/mysite/forums/. Could anyone help me solve this problem? Thanks in advance!

    Read the article

  • Android EditText and addTextChangedListener

    - by Alex
    I'm currently porting a database manager to Android, and for performance reasons I would like to update only properties that have been modified. I'm trying to do this with addTextChangedListener in order to add modified entries to a list, but my program never enters any of its methods.

        EditText Et = (EditText) Editors.get(Prop.Name);
        Et.addTextChangedListener(new TextWatcher() {
            @Override
            public void afterTextChanged(Editable s) {
                // TODO Auto-generated method stub
            }

            @Override
            public void beforeTextChanged(CharSequence s, int start, int count, int after) {
                // TODO Auto-generated method stub
            }

            @Override
            public void onTextChanged(CharSequence s, int start, int before, int count) {
                if (Prop.GetType() == Property.PROPTYPE.num) {
                    float f = Float.parseFloat(s.toString());
                    Prop.FromString(f);
                } else {
                    Prop.FromString(s.toString());
                }
                propertiesToUpdate.add(Prop);
            }
        });
        Et.setText(Prop.ToString());

    Read the article

  • Am I reindexing this Sphinx index correctly?

    - by Ethan
    According to the Thinking Sphinx docs...

        Turning on delta indexing does not remove the need for regularly running a full re-index ...

    So I set up this cron job...

        50 10 * * * cd /var/www/my_app/current && /opt/ruby/bin/rake thinking_sphinx:index RAILS_ENV=production >> /var/www/my_app/current/log/reindexing.log 2>&1

    Is that a reasonable way to do it? Should I be doing something different?

    Read the article

  • Repository vs Data Access

    - by vdh_ant
    Hi guys. In the context of an n-tier application, is there a difference between what you would consider your data access classes to be and your repositories? I tend to think yes, but I just wanted to see what others thought. My thinking is that the job of the repository is just to contain and execute the raw query itself, whereas the data access class would create the context, execute the repository (passing in the context), handle mapping the data model to the domain model, and return the result back up... What do you guys think? Also, do you see any of this changing in a LINQ to XML scenario (assuming that you change the context for the relevant XDocument)? Cheers, Anthony

    Read the article

  • What is the design principle behind Servlets being Singletons?

    - by Sandeep Jindal
    A servlet container "generally" creates one instance of a servlet and uses different threads of the same instance to serve multiple requests. (I know this can be changed using the deprecated SingleThreadModel and other features, but this is the usual way.) I thought the simple reason behind this was performance gain, as creating threads is cheaper than creating instances. But it seems this is not the reason. On the other hand, creating an instance per request has the small advantage that developers never have to worry about thread safety. I am trying to understand the reason for this decision, given the trade-off against thread safety.
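
    To make the thread-safety trade-off concrete, here is a small sketch (my own illustration, not from the question): because one servlet instance serves all requests, any mutable instance field becomes shared state across request threads:

        import java.io.IOException;

        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class CounterServlet extends HttpServlet {
            private int hits = 0;   // shared by every request thread: a data race

            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                hits++;             // unsafe increment; one instance per request would avoid this
                resp.getWriter().println("hit #" + hits);
            }
        }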

    Read the article

  • mmap() for large file I/O?

    - by Boatzart
    I'm creating a utility in C++ to be run on Linux which can convert videos to a proprietary format. The video frames are very large (up to 16 megapixels), and we need to be able to seek directly to exact frame numbers, so our file format uses libz to compress each frame individually and append the compressed data onto a file. Once all frames are finished being written, a journal which includes metadata for each frame (including their file offsets and sizes) is written to the end of the file. I'm currently using ifstream and ofstream to do the file I/O, but I am looking to optimize as much as possible. I've heard that mmap() can increase performance in a lot of cases, and I'm wondering if mine is one of them. Our files will be in the tens to hundreds of gigabytes, and although writing will always be done sequentially, random-access reads should be done in constant time. Any thoughts as to whether I should investigate this further, and if so, does anyone have any tips for things to look out for? Thanks!
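
    For the read side, a minimal sketch of what an mmap()-based frame read could look like (my own illustration; error handling is omitted, and frame_offset/frame_size are hypothetical values that would come from the journal). On a 64-bit build the whole file can be mapped and the kernel pages data in on demand, which suits constant-time random access:

        #include <fcntl.h>
        #include <sys/mman.h>
        #include <sys/stat.h>
        #include <unistd.h>

        #include <cstring>
        #include <vector>

        // Map the file read-only and copy one compressed frame out of it.
        std::vector<char> read_frame(const char* path, off_t frame_offset, size_t frame_size) {
            int fd = open(path, O_RDONLY);
            struct stat st;
            fstat(fd, &st);

            void* base = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
            close(fd);  // the mapping remains valid after closing the descriptor

            std::vector<char> frame(frame_size);
            std::memcpy(frame.data(), static_cast<char*>(base) + frame_offset, frame_size);

            munmap(base, st.st_size);
            return frame;
        }

    In practice the mapping would be created once and reused across reads rather than mapped and unmapped per frame.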

    Read the article

  • Is it possible to destroy a CDI scope?

    - by Matt Ball
    I'm working on a Java EE application, primarily JAX-RS with a JSF admin console, that uses CDI/Weld for dependency injection with @ApplicationScoped objects. Minor debugging issues aside, CDI has worked beautifully for this project. Now I need some very coarse-grained control over CDI-injected object lifecycles. I need the ability to:

      - Remove an injected object from the application context, or
      - Destroy/delete/clear/reset/remove the entire application context, or
      - Define my own @ScopeType and implementing Context in which I could provide methods to perform one of the two above tasks.

    I'm fully aware that this is across, if not against, the grain of CDI and dependency injection in general. I just want to know:

      1. Is this remotely possible?
      2. If yes, what is the easiest/simplest/quickest/foolproofiest way to get the job done?

    Read the article

  • Replication - synchronizing most of the data some of the time

    - by uncle brad
    I have some data that isn't properly "partitioned" (for lack of a better word). All inserts, processing and reporting happen on the same table. The bulk of the processing happens not long after the insert and not long after that it becomes immutable (we're talking days). I could do all inserts and processing on a new table that I replicate to the old table. When I detect that the data has become immutable I would delete the data from the new table, but I would edit the delete replication stored procedure so that the delete did not replicate. How bad an idea is this? It seems attractive at the moment (I haven't slept on it yet) because it might mitigate a performance problem with only very small changes to the application. It also seems like it might be a good way to shoot myself in the foot.

    Read the article

  • Using SQL Server for WSS 3.0 instead of Windows Internal database

    - by val
    Hi folks, there are actually two related questions: First, is it possible or advisable to use a full-blown stand-alone SQL Server for SharePoint Services WSS 3.0 instead of the supplied Windows Internal Database it comes with? The client I am working for is asking to use their existing SQL Server for all WSS content databases, to possibly minimize admin effort and improve performance. Second, would you advise installing WSS on one physical server and the content database on another server? Any gain in performance? Practicality? Etc. The default is WSS and all of its databases installed on the same single server. We don't really need a farm setup of MOSS, because the WSS capabilities are enough for our needs. Thanks, Val

    Read the article

  • logging in scala

    - by IttayD
    In Java, the standard idiom for logging is to create a static variable for a logger object and use that in the various methods. In Scala, it looks like the idiom is to create a Logging trait with a logger member and mix the trait into concrete classes. This means that each time an object is created it calls the logging framework to get a logger, and the object is also bigger due to the additional reference. Is there an alternative that allows the ease of use of "with Logging" while still using a per-class logger instance? EDIT: My question is not about how one can write a logging framework in Scala, but rather how to use an existing one (log4j) without incurring an overhead in performance (getting a reference for each instance) or code complexity. Also, yes, I want to use log4j, simply because I'll use 3rd-party libraries written in Java that are likely to use log4j.
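
    One common pattern, sketched below (my own illustration, assuming log4j 1.x on the classpath): make the trait's logger a lazy val keyed on getClass. Each instance still carries one reference, but Logger.getLogger caches loggers by name, so every instance of a class shares the same underlying Logger, and the framework lookup is a cheap cache hit rather than fresh construction:

        import org.apache.log4j.Logger

        trait Logging {
          // Initialized on first use; log4j returns the same cached Logger
          // for every instance of the same class.
          @transient protected lazy val logger: Logger =
            Logger.getLogger(getClass.getName)
        }

        class Worker extends Logging {
          def run(): Unit = {
            logger.info("starting work")
          }
        }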

    Read the article

  • How to mark a single element as empty in a float array

    - by Vineeth Mohan
    I have a large float (primitive) array, and not every element in the array is filled. How can I mark a particular element as EMPTY? I understand this can be achieved with some special sentinel value, but I would still like to know the standard way. And even if I use a special value, how will I handle a situation where an actual data item equals that value? In short, my question is how to implement the NULL feature in a primitive-type array in Java. PS - The reason why I am not using Float objects is to achieve high memory and speed performance. Thanks, Vineeth
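
    A minimal sketch of the usual answer (my own illustration): Float.NaN is the conventional "empty" marker for a primitive float array, because NaN never compares equal to anything - including itself - so it must be tested with Float.isNaN rather than ==. The caveat is that this only works if NaN can never be a legitimate data value:

        import java.util.Arrays;

        public class SparseFloats {
            public static void main(String[] args) {
                float[] values = new float[8];
                Arrays.fill(values, Float.NaN);   // mark every slot as empty

                values[3] = 42.0f;                // fill one slot

                for (int i = 0; i < values.length; i++) {
                    if (Float.isNaN(values[i])) { // values[i] == Float.NaN is always false
                        continue;                 // skip empty slots
                    }
                    System.out.println("slot " + i + " = " + values[i]);
                }
            }
        }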

    Read the article

  • .Net Logger (Write your own vs log4net/enterprise logger/nlog etc.)

    - by Jack
    I work for an IT department with about 50+ developers. It used to be about 100+ developers, but was cut because of the recession. When our department was bigger, an ambitious effort was made to set up a special architecture group. One thing this group decided to do was create our own internal logger. They thought it was such a simple task that we could spend resources and do it ourselves. Now we are having issues with performance and difficulty viewing the generated logs, and some employees are frustrated that we are spending resources on infrastructure like this instead of focusing on serving our business and using things that already exist, like log4net or Enterprise Logger. Can you assist me in listing reasons why you should not create your own .NET logger? Reasons why you should are also welcome, to get a fair point of view :)

    Read the article

  • checking if records exist in DB, in a single step or 2 steps?

    - by Sinan
    Suppose you want to get a record from the database which returns a large amount of data and requires multiple joins. So my question is: is it better to use a single query that checks whether the data exists and gets the result if it does, or to run a simpler query to check whether the data exists first, and then, if the record exists, query once again to get the result, knowing that it exists? Example with 3 tables a, b and ab (junction table):

        select * from a, b, ab
        where condition and condition and condition and condition etc...

    or:

        select id from a, b, ab
        where condition

    and then, if the id exists, do the query above. I don't know if there is any reason to do the second. Any ideas how this affects DB performance, or does it matter at all?

    Read the article

  • Truncate a UTF-8 string to fit a given byte count in PHP

    - by fsb
    Say we have a UTF-8 string $s and we need to shorten it so it can be stored in N bytes. Blindly truncating it to N bytes could mess it up. But decoding it to find the character boundaries is a drag. Is there a tidy way? [Edit 20100414] In addition to S.Mark’s answer: mb_strcut(), I recently found another function to do the job: grapheme_extract($s, $n, GRAPHEME_EXTR_MAXBYTES); from the intl extension. Since intl is an ICU wrapper, I have a lot of confidence in it.

    Read the article

  • Alternative databases to use when putting IIS Logs into a database using LogParser

    - by Robin Day
    We have run some scripts that use LogParser to dump our IIS logs into a SQL Server database. We can then query this to get simple stats on hits, usage, etc. It's also good when linking it to error log databases and performance counter databases to compare usage with errors, etc. Having implemented this for just one system, over the last 2-3 weeks we already have a 5GB database with around 10 million records. This is making any queries to this database quite slow, and it will no doubt cause storage issues if we continue to log as we are. Can anyone suggest any alternative databases that we could use for this data that would be more efficient for such logs? I'd be particularly interested in any experience of Google's BigTable or Amazon's SimpleDB. Are either of these suitable for reporting queries? COUNTs, GROUP BYs, PIVOTs?

    Read the article

  • git: having 2 push/pull repos in sync (or 1 push/pull and 1 pull in sync)

    - by xavjuan
    Hello, we work on multiple geographically separate sites. Today, our git clones all live on one site, A, and users from site B have to ssh over to do a git clone or to push in changes. These are bare repos which are updated through pushes. Ideally, for git clone/push performance, I'd like to limit having to go over ssh. I'd like to have a copy of git repo X live on site A and site B, with some syncing mechanism between them; OR have X live on both sites but only allow pushing to A (and have that set up correctly at clone time on B). I'm worried about the case where someone on site A pushes changes to the repo at site A at the same time that someone on site B pushes a truly conflicting change to the repo at site B. Is there some syncing solution built into git for distributed open repos like this? Or a way to have a clone from X set the origin/parent to the X from the other site? Thanks, -John

    Read the article

  • MySQL - are FK's useful / viable in a web app?

    - by yoda
    Hi all, I've encountered this discussion related to FKs and web applications. Basically, some people say that FKs in web applications don't represent a real improvement and can even make the application slower in some cases. What do you guys think? What's your experience? -- A quote from Heikki Tuuri, creator of the InnoDB engine, founder and CEO of Innobase:

      - InnoDB checks foreign keys as soon as a row is updated; no batching is performed and no checks are delayed till transaction commit
      - Foreign keys are often a serious performance overhead, but help maintain data consistency
      - Foreign keys increase the amount of row-level locking done and can make it spread to a lot of tables besides the ones directly updated

    Read the article
