Search Results

Search found 31421 results on 1257 pages for 'software performance'.


  • JFFS2 poor mount performance

    - by Marcin Polkowski
    I run multiple ARM boards with Debian Linux installed. Each board is equipped with 512 MB of NAND memory. I've observed that after ~3 months of continuous running, boot time increased significantly - it now takes over 3 minutes to mount the filesystem (JFFS2). The system was using about 35% of the available storage, so I removed unnecessary files (getting down to ~18%), but this didn't change anything. Then I realized that my software produces directories that are left empty, so I removed ~500 empty and unnecessary dirs. This didn't help either. After the system starts, I see the JFFS2 garbage collector (jffs2_gcd_mtd4) running and occupying over 90% of the CPU. Now my question: is there a way to "optimize" the JFFS2 filesystem for better performance - faster booting (my system has limited time to boot up)? It would be great if this optimization could be done remotely - I have no physical access to the boards.

    Read the article

  • xinetd vs iptables for port forwarding performance

    - by jamie.mccrindle
    I have a requirement to run a Java-based web server on port 80. The options are: a web proxy (Apache, nginx, etc.), xinetd, iptables port forwarding, or setuid. The baseline would be running the app using setuid, but I'd prefer not to for security reasons. Apache is too slow, and nginx doesn't support keep-alives, so new connections are made for every proxied request. xinetd is easy to set up but creates a new process for every request, which I've seen cause problems in a high-performance environment. The last option is port forwarding with iptables, but I have no experience of how fast it is. Of course, the ideal solution would be to do this on a dedicated hardware firewall / load balancer, but that's not an option at present.

    Read the article

  • Recycle application pool, warm-up scripts - performance tuning in a SharePoint WCM site

    - by joel14141
    I am trying to tune a public-facing WCM site we have in SharePoint, and I have some doubts. By default, application pools are set to recycle themselves at 2 am each night, and because of that we need warm-up scripts. But as I was googling this topic I found mixed reactions: some MVPs say it's not advisable to recycle the application pool daily, and some say otherwise, so I am confused. If I don't recycle the application pool, then I don't have to use warm-up scripts. But since my site is public facing and used all around the globe, is it advisable to recycle it daily? It will affect the performance of my site - even if I run warm-up scripts afterwards, I don't think it would perform as well as it should. Any advice on that?

    Read the article

  • How can a software agency deliver quality software/win projects?

    - by optician
    I currently work for a bespoke software agency. Does anyone have any experience of how to win well-priced work? It seems there is so much competition from offshore and bedroom programming teams that pricing is extremely competitive these days. In terms of budget, it feels very different from a software product company or an internal IT department. As someone else said before, we only ever really get to version 1.0 of a lot of our software, unless the client is big enough, because it doesn't make business sense to spend ages making the software the best we can. It's as if we are doing the same quality of work as internal IT. Also, a lot of our clients are not technically minded and therefore will not pay for things they don't understand. As our company does not have the money to turn down work, it often happens that we take on complicated work for far too little money. I have got a lot better at managing change and keeping tight specs, etc., but it is still hard.

    Read the article

  • A Newbie question regarding Software Development

    - by Sharif
    Hi, I'm about to complete my B.Pharm (Hons.) degree and, as you might expect, I don't have much knowledge about programming. I am thinking about building a piece of software on my own. Could you tell me what to learn first for that? Is it too hard for a student from another discipline to build software? The software I want to make is like a dictionary (or more specifically like the "Physician's Desk Reference"). It should find the generic name, company name, indication, price, etc. of a drug when I enter the brand name, and vice versa. To build software like that, which programming language could help me most, and which (and how many) languages should I learn first? In my country there is no practice of community pharmacy (most pharmacy stores are run by unskilled people), so this kind of tool could help them sell drugs. Would you please tell me what I need to do and how tough it is? I'm very keen to learn programming. Thanks in advance. NB: I originally posted this in the ASKREDDIT section, but it seems that was not the right place for this type of question, so I'm posting it again here.
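
    To give a concrete (and deliberately tiny) picture of what such a lookup involves, here is a toy sketch in C# - chosen only as an example language, not a recommendation - of a two-way drug lookup. The Drug class, the sample entries and all names in it are hypothetical; a real reference application would load its data from a file or database rather than hard-coding it:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Drug
        {
            public string BrandName;
            public string GenericName;
            public string Company;
            public string Indication;
            public decimal Price;
        }

        class DrugLookupDemo
        {
            static void Main()
            {
                // Hypothetical sample data; a real app would load this from a file or database.
                var drugs = new List<Drug>
                {
                    new Drug { BrandName = "BrandA", GenericName = "Paracetamol", Company = "SomePharma", Indication = "Fever, pain", Price = 1.00m },
                    new Drug { BrandName = "BrandB", GenericName = "Omeprazole", Company = "OtherPharma", Indication = "Acidity", Price = 5.00m }
                };

                // Brand name -> full record (case-insensitive).
                var byBrand = drugs.ToDictionary(d => d.BrandName, StringComparer.OrdinalIgnoreCase);
                if (byBrand.TryGetValue("BrandB", out var match))
                    Console.WriteLine($"{match.BrandName}: {match.GenericName}, {match.Company}, {match.Indication}, {match.Price}");

                // And the reverse: all brand names for a given generic name.
                var brands = drugs
                    .Where(d => d.GenericName.Equals("Paracetamol", StringComparison.OrdinalIgnoreCase))
                    .Select(d => d.BrandName);
                Console.WriteLine("Paracetamol brands: " + string.Join(", ", brands));
            }
        }

    Most mainstream languages (C#, Java, Python) can express this kind of keyed lookup equally well; the larger learning investment is usually in storing the data (a database) and building the user interface around it.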

    Read the article

  • How do you demonstrate performance in paired-programming environments?

    - by NT3RP
    Performance reviews have come up recently at my work, and I was put in an interesting position. Our team does a lot of pair programming, which tends to average out the skill differences between team members (especially since we rotate pairs). Generally, when doing performance reviews, you look back at the work you've done, demonstrate what you've accomplished and how you've exceeded expectations, and try to negotiate a raise or other benefits. How do you demonstrate (or even measure) individual performance in an environment like this?

    Read the article

  • How do I make my project into deliverable software and preserve database integrity and correctness?

    - by user287745
    I have made an application project in VS 2008 (C#, with a SQL Server database created from VS 2008). The database has about 20 tables with many fields in each, and I have made an interface for adding, deleting, editing and retrieving data according to the predefined needs of the users. Now I have to:

    1) Turn the project into software I can deliver to my professor - he should be able to just double-click an icon and the application simply starts, with no VS 2008 needed to start it under the debugger.

    2) Put the database on one powerful computer (dual core, latest everything, Windows XP) while users access it from other computers connected over the LAN. I am able to change the connection string to the shared database in VS 2008 / the debugger whenever the server changes, but how am I supposed to do that once it is shipped as software? (See the sketch below.)

    3) Support many clients. Am I supposed to give the same software to everyone so they can all connect to the database, and how will the integrity and correctness of the database be maintained? The db.mdf file will be in a folder shared with read and write access, so it is not guaranteed that only one user will write at a time. Is there any coding needed for this, or something else?

    Please help me out here; I am stuck and have no practical experience. I would appreciate any help. Thank you.
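
    For question 2 in particular, a common approach - offered here only as a sketch, not as the asker's eventual fix - is to move the connection string out of the compiled code and into the application's .config file, so it can be edited on each client machine with a text editor instead of Visual Studio. The connection string name "MyAppDb" and the server name are placeholders:

        <!-- App.config shipped next to the .exe (placeholder names) -->
        <configuration>
          <connectionStrings>
            <add name="MyAppDb"
                 connectionString="Data Source=SERVER-PC\SQLEXPRESS;Initial Catalog=MyDatabase;Integrated Security=True"
                 providerName="System.Data.SqlClient" />
          </connectionStrings>
        </configuration>

        // C# side (requires a project reference to System.Configuration):
        using System.Configuration;
        using System.Data.SqlClient;

        public static class Db
        {
            public static SqlConnection OpenConnection()
            {
                // Read the string at runtime instead of hard-coding it, so it can be
                // changed per machine by editing MyApp.exe.config.
                var cs = ConfigurationManager.ConnectionStrings["MyAppDb"].ConnectionString;
                var connection = new SqlConnection(cs);
                connection.Open();
                return connection;
            }
        }

    For question 3, sharing the raw db.mdf folder is what risks corruption; having every client connect to a single SQL Server instance (which handles concurrent writes itself) avoids that, and the config approach above makes pointing each client at that one server straightforward.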

    Read the article

  • Which JavaScript graphics library has the best performance?

    - by DNS
    I'm doing some research for a JavaScript project where the performance of drawing simple primitives (e.g. lines) is by far the top priority. The answers to this question provide a great list of JS graphics libraries. While I realize that the choice of browser has a greater impact than the library, I'd like to know whether there are any differences between them before choosing one. Has anyone done a performance comparison between any of these?

    Read the article

  • server performance metrics report and practicality

    - by Anjesh
    I need to prepare a web server (Apache + PHP) performance report containing important metrics like CPU usage, disk I/O, and memory usage on a per-user basis. A couple of domains are hosted on the same server and run as separate users via FastCGI. The reason is that some hosted applications sometimes use a lot of CPU, making the server slow for the other applications (running as separate users). I am planning to develop scripts for this, as I can't seem to find any simple utilities for the purpose. The script will take snapshots of the per-user metrics at defined intervals, say every 15 minutes, and record them. Any abnormalities will be reported via email. How practical is that? It would also be interesting to know what else should be recorded.

    Read the article

  • From HttpRuntime.Cache to Windows Azure Caching (Preview)

    - by Jeff
    I don't know about you, but the announcement of Windows Azure Caching (Preview) (yes, the parentheses are apparently part of the interim name) made me a lot more excited about using Azure. Why? Because one of the great performance tricks of any Web app is to cache frequently used data in memory, so it doesn't have to hit the database, a service, or whatever.

    When you run your Web app on one box, HttpRuntime.Cache is a sweet and stupid-simple solution. Somewhere in the data fetching pieces of your app, you can see if an object is available in cache, and return that instead of hitting the data store. I did this quite a bit in POP Forums, and it dramatically cuts down on the database chatter. The problem is that it falls apart if you run the app on many servers, in a Web farm, where one server may initiate a change to that data, and the others will have no knowledge of the change, making it stale. Of course, if you have the infrastructure to do so, you can use something like memcached or AppFabric to do a distributed cache, and achieve the caching flavor you desire.

    You could do the same thing in Azure before, but it would cost more because you'd need to pay for another role or VM or something to host the cache. Now, you can use a portion of the memory from each instance of a Web role to act as that cache, with no additional cost. That's huge. So if you're using a percentage of memory that comes out to 100 MB, and you have three instances running, that's 300 MB available for caching.

    For the uninitiated, a Web role in Azure is essentially a VM that runs a Web app (worker roles are the same idea, only without the IIS part). You can spin up many instances of the role, and traffic is load balanced to the various instances. It's like adding or removing servers to a Web farm all willy-nilly and at your discretion, and it's what the cloud is all about. I'd say it's my favorite thing about Windows Azure.

    The slightly annoying thing about developing for a Web role in Azure is that the local emulator that's launched by Visual Studio is a little on the slow side. If you're used to using the built-in Web server, you're used to building and then alt-tabbing to your browser and refreshing a page. If you're just changing an MVC view, you're not even doing the building part. Spinning up the simulated Azure environment is too slow for this, but ideally you want to code your app to use this fantastic distributed cache mechanism.

    So first off, here's the link to the page showing how to code using the caching feature. If you're used to using HttpRuntime.Cache, this should be pretty familiar to you. Let's say that you want to use the Azure cache preview when you're running in Azure, but HttpRuntime.Cache if you're running local, or in a regular IIS server environment. Through the magic of dependency injection, we can get there pretty quickly. First, design an interface to handle the cache insertion, fetching and removal.
    Mine looks like this:

        public interface ICacheProvider
        {
            void Add(string key, object item, int duration);
            T Get<T>(string key) where T : class;
            void Remove(string key);
        }

    Now we'll create two implementations of this interface… one for Azure cache, one for HttpRuntime:

        public class AzureCacheProvider : ICacheProvider
        {
            public AzureCacheProvider()
            {
                _cache = new DataCache("default"); // in Microsoft.ApplicationServer.Caching, see how-to
            }

            private readonly DataCache _cache;

            public void Add(string key, object item, int duration)
            {
                _cache.Add(key, item, new TimeSpan(0, 0, 0, 0, duration));
            }

            public T Get<T>(string key) where T : class
            {
                return _cache.Get(key) as T;
            }

            public void Remove(string key)
            {
                _cache.Remove(key);
            }
        }

        public class LocalCacheProvider : ICacheProvider
        {
            public LocalCacheProvider()
            {
                _cache = HttpRuntime.Cache;
            }

            private readonly System.Web.Caching.Cache _cache;

            public void Add(string key, object item, int duration)
            {
                _cache.Insert(key, item, null, DateTime.UtcNow.AddMilliseconds(duration), System.Web.Caching.Cache.NoSlidingExpiration);
            }

            public T Get<T>(string key) where T : class
            {
                return _cache[key] as T;
            }

            public void Remove(string key)
            {
                _cache.Remove(key);
            }
        }

    Feel free to expand these to use whatever cache features you want. I'm not going to go over dependency injection here, but I assume that if you're using ASP.NET MVC, you're using it. Somewhere in your app, you set up the DI container that resolves interfaces to concrete implementations (Ninject calls it a "kernel" instead of a container). For this example, I'll show you how StructureMap does it. It uses a convention-based scheme, where if you need to get an instance of IFoo, it looks for a class named Foo. You can also do this mapping explicitly. The initialization of the container looks something like this:

        ObjectFactory.Initialize(x =>
        {
            x.Scan(scan =>
            {
                scan.AssembliesFromApplicationBaseDirectory();
                scan.WithDefaultConventions();
            });
            if (Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.IsAvailable)
                x.For<ICacheProvider>().Use<AzureCacheProvider>();
            else
                x.For<ICacheProvider>().Use<LocalCacheProvider>();
        });

    If you use Ninject or Windsor or something else, that's OK. Conceptually they're all about the same. The important part is the conditional statement that checks to see if the app is running in Azure. If it is, it maps ICacheProvider to AzureCacheProvider, otherwise it maps to LocalCacheProvider. Now when a request comes into your MVC app, and the chain of dependency resolution occurs, you can see to it that the right caching code is called. A typical design may have a call stack that goes: Controller -> BusinessLogicClass -> Repository.
    Let's say your repository class looks like this:

        public class MyRepo : IMyRepo
        {
            public MyRepo(ICacheProvider cacheProvider)
            {
                _context = new MyDataContext();
                _cache = cacheProvider;
            }

            private readonly MyDataContext _context;
            private readonly ICacheProvider _cache;

            public SomeType Get(int someTypeID)
            {
                var key = "somename-" + someTypeID;
                var cachedObject = _cache.Get<SomeType>(key);
                if (cachedObject != null)
                {
                    _context.SomeTypes.Attach(cachedObject);
                    return cachedObject;
                }
                var someType = _context.SomeTypes.SingleOrDefault(p => p.SomeTypeID == someTypeID);
                _cache.Add(key, someType, 60000);
                return someType;
            }

            // ... more stuff to update, delete or whatever, being sure to remove
            // from cache when you do so
        }

    When the DI container gets an instance of the repo, it passes an instance of ICacheProvider to the constructor, which in this case will be whatever implementation was specified when the container was initialized. The Get method first tries to hit the cache, and of course doesn't care what the underlying implementation is, Azure, HttpRuntime, or otherwise. If it finds the object, it returns it right then. If not, it hits the database (this example is using Entity Framework), and inserts the object into the cache before returning it. The important thing not pictured here is that other methods in the repo class will construct the key for the cached object, in this case "somename-" plus the ID of the object, and then remove it from cache, in any method that alters or deletes the object. That way, no matter what instance of the role is processing the request, it won't find the object if it has been made stale, that is, updated or outright deleted, forcing it to attempt to hit the database.

    So is this good technique? Well, sort of. It depends on how you use it, and what your testing looks like around it. Because of differences in behavior and execution of the two caching providers, for example, you could see some strange errors. For example, I immediately got an error indicating there was no parameterless constructor for an MVC controller, because the DI resolver failed to create instances for the dependencies it had. In reality, the NuGet packaged DI resolver for StructureMap was eating an exception thrown by the Azure components that said my configuration, outlined in that how-to article, was wrong. That error wouldn't occur when using the HttpRuntime. That's something a lot of people debate about using different components like that, and how you configure them. I kinda hate XML config files, and like the idea of the code-based approach above, but you should be darn sure that your unit and integration testing can account for the differences.

    Read the article

  • How to measure startup time and order of Windows services on boot?

    - by djangofan
    I am not asking how to measure server startup time here. I am wondering if anyone knows of a tool that can measure and show a graph of the startup time and order of all the Windows services during system startup. I saw a software program shown on my local Portland news last week that does this, but I am unable to remember what it was called or anything else about it. All I remember is that it was a "tech" news story meant to help computer users with their computers. So, I know the software exists and I am trying to find it.

    Read the article

  • Wireless performance on Ubuntu 9.10

    - by Brian
    Is there something I should do to my networking configuration in Ubuntu to improve the performance of my wireless connection? I'm on a netbook dual-booting Windows 7 and Ubuntu 9.10. I pick up a much stronger Wi-Fi signal in Windows than in Ubuntu. As soon as I boot Ubuntu, it connects to the network with a strong signal and then loses the signal very quickly. After it dies, I can't reconnect. I've tested this on a couple of different networks with the same outcome.

    Read the article

  • RPC command to initiate a software install

    - by ericmayo
    I was recently working with a product from Symantec called Norton Endpoint Protection. It consists of a server console application and a deployment application, and I would like to incorporate their deployment method into a future version of one of my products. The deployment application allows you to select computer workstations running Win2K, WinXP, or Win7. The selection of workstations is provided from either AD (Active Directory) or an NT Domain (WINS/DNS NetBIOS lookup). From the list, one can click and choose which workstations to deploy the endpoint software to, which is Symantec's virus & spyware protection suite. Then, after selecting which workstations should receive the package, the software copies the setup.exe program to each workstation (presumably over the administrative share \\pcname\c$) and then commands the workstation to execute setup.exe, resulting in the workstation installing the software. I really like how their product works but am not sure what they are doing to accomplish all the steps. I've not done any deep investigation into this, such as sniffing the network, etc., and wanted to check here to see if anyone is familiar with what I'm talking about and knows how it's accomplished, or has ideas how it could be accomplished. My thinking is that they are using the admin share to copy the software to the selected workstations and then issuing an RPC call to command the workstation to do the install. What's interesting is that the workstations do this without any of the logged-in users knowing what's going on until the very end, where a reboot is necessary. At that point, the user gets a pop-up asking to reboot now or later, etc. My hunch is that the setup.exe program is popping up this message. To the point: I'm looking to find out the mechanism by which one Windows-based machine can tell another to do some action or run some program. My programming language is C/C++. Any thoughts/suggestions appreciated.
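
    I don't know what mechanism Symantec's tool actually uses, but one standard way for a Windows machine to tell another to run a program, matching the description above, is WMI's Win32_Process.Create method (after copying the installer over the admin share). The asker works in C/C++, where the same WMI class is reachable through COM; the sketch below uses C# and System.Management only to keep the example short. The host name, credentials and command line are placeholders:

        using System;
        using System.Management; // add a reference to System.Management.dll

        class RemoteInstallSketch
        {
            static void Main()
            {
                // Connect to the WMI namespace on the remote machine with an account
                // that is in its local Administrators group (placeholder credentials).
                var options = new ConnectionOptions
                {
                    Username = @"DOMAIN\adminuser",
                    Password = "secret"
                };
                var scope = new ManagementScope(@"\\remoteHost\root\cimv2", options);
                scope.Connect();

                // Ask WMI to start the previously copied installer. The process runs
                // non-interactively, so the install itself must be silent.
                using (var processClass = new ManagementClass(scope, new ManagementPath("Win32_Process"), null))
                {
                    var inParams = processClass.GetMethodParameters("Create");
                    inParams["CommandLine"] = @"C:\temp\setup.exe /quiet";
                    var outParams = processClass.InvokeMethod("Create", inParams, null);
                    Console.WriteLine("Win32_Process.Create returned " + outParams["ReturnValue"]);
                }
            }
        }

    Other common mechanisms for the same job include remotely installing a temporary service via the Service Control Manager (the way PsExec works) and, on newer systems, WinRM; which of these a given commercial product uses isn't something this sketch can answer.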

    Read the article

  • I've got my Master's in Software Engineering... Now what? [closed]

    - by Brian Driscoll
    Recently I completed a Master of Science in Software Engineering from Drexel University (Philadelphia, PA, US), because I wanted to have some formal education in software (my undergrad is in Math Ed) and also because I wanted to be able to advance my career beyond just programming. Don't get me wrong; I love to code. I spend a lot of my spare time coding. However, for me writing code is just a means to an end: what I REALLY love is designing software. Not visual design, mind you, but the architecture of the system. So, ideally I'd like to try to get a job doing software architecture. The problem is that I have no real experience in it besides my graduate course work. So, what should I do to make my "bones" in software architecture? UPDATE Just so it's clear, I have over 5 years of work experience in software development and an MCTS cert in addition to my education, so I'm not looking for the usual "I'm fresh out of school, what should I do?" advice.

    Read the article

  • Where can I get software-related legal advice?

    - by musicfreak
    Whenever someone asks a legal question here on SO, the response is usually something along the lines of "we are not lawyers." Okay, that's legitimate, but in that case, how can I talk to a lawyer about software-related legal matters? I could look through the phone book and find a local lawyer, but then I have no way of knowing whether the lawyer knows anything about software. (And I hear most local lawyers charge for your time, even if it's just a simple question.) Is there maybe some kind of online service for this sort of thing? For now, I'm just looking for some basic advice, so something free would be awesome, even if the "quality" is not as good. However, I'll still take any kind of paid service - I'll keep them in mind for the future. You can give me anything from a forum or Q&A site (like this one) to a professional service. Just remember that I'm looking specifically for software-related legal advice. I'm sure most lawyers know a thing or two about software, but I'd rather talk to someone who legitimately knows his stuff than someone who can only guess.

    Read the article

  • Performance & Security Factors of Symbolic Links

    - by Stoosh
    I am thinking about rolling out a very stripped-down version of release management for some PHP apps I have running. Essentially the plan is to store each release in /home/release/1.x etc. (exported from a tag in SVN), symlink it to /live_folder, and change the document root in the Apache config. I don't have a problem with setting all this up (I've actually got it working at the moment); however, I'm a developer with only basic knowledge of the server admin side of things. Is there anything I need to be aware of from a security or performance perspective when using this method of release management? Thanks

    Read the article

  • What can I do to inform users of potential errors in my software in order to minimize liability?

    - by phobitor
    I'm an independent software developer who has spent the last few months creating software for viewing and searching map data. The software has some navigation functionality as well (mapping, directions, etc.). The eventual goal is to sell it in mobile app markets. I use OpenStreetMap as my data source. I'm concerned about liability for erroneous map data, routing instructions, etc. that might result when someone uses the application. There are a lot of stories on the internet where someone gets into an accident, gets stuck, or gets lost because of their GPS unit/Google Maps/mapping app... I myself have come across incorrect map data in a GPS unit I have in my car. While I try to make my own software as bug-free as possible, no software is truly bug-free. And moving beyond what I can control, OpenStreetMap data (and street map data in general) is prone to errors as well. What steps can I take to clearly inform the user that results from the software aren't always perfect, and to minimize my liability?

    Read the article

  • Should I write my own forum software?

    - by acidzombie24
    I have already built a site from scratch. It has banning, PMs, comments, etc. The PMs and comments are done using Markdown (like SO). There are pros and cons to writing my own forum versus using existing forum software, but some of the cons keeping me from using existing forum software are:

    - Multiple logins: one for the site, one for the separate forums.
    - Customization: I'll need to change the toolbar in the forum software so I can access pages on the regular site.
    - Look consistency: it may look drastically different from my site even after applying lots of CSS changes.
    - Banning and user consistency: users may be banned on the site or on the forums but not the other; users may pick a different username (or multiple usernames) on the forum instead of being forced to use the same username on both the site and the forum.

    Should I write my own forum code, or should I use something already written? What are some reasons for or against writing my own versus using existing forum software?

    Read the article

  • Experience with Intel X25-M 160GB and Oracle

    - by derobert
    We're considering building an Oracle database with 12 Intel X25-M G2 160GB drives in software RAID10. It'd be running Linux. The database gets some very heavy write activity during the early-morning data load; other than that, it is mostly read-only (and the read load is fairly minimal). We're currently running on 11 150GB Velociraptors (also Linux software RAID10), and are hoping the X25-M will speed up the data load. We currently have redo on different disks than the rest of the data. I'm wondering a few things: Any experience with using X25-M drives for databases? The X25-E are unfortunately beyond our budget. Would it hurt to separate redo off to some magnetic (non-SSD) drives, say 2 (RAID1) or 4 (RAID10) Seagate Constellations?

    Read the article

  • Thunderbird very slow with Gmail

    - by koskoz
    I'm using the latest version of Thunderbird with 3 Gmail accounts. Every time I launch it, it seems to download all my messages again. I've compacted folders (does that action apply to all 3 accounts, or do I need to do it for each of them?) and deleted the .msc files, but nothing changed. The result is that the software uses a lot of bandwidth and is very slow. It's a pain to write a message or even to view one; the whole application is so slow it's almost unusable - I've never seen anything like it. I'm using these add-ons: Dictionary, Google Calendar, Lightning. My Gmail accounts are configured to use IMAP.

    Read the article

  • Best Embedded SQL DB for write performance?

    - by max.minimus
    Has anybody done any benchmarking or evaluation of the popular open-source embedded SQL DBs for performance, particularly write performance? I have some 1:1 comparisons for SQLite, Firebird Embedded, Derby and HSQLDB (are there others I am missing?) but no across-the-board comparisons... Also, I'd be interested in the overall developer experience with any of these (for a Java app).

    Read the article

  • What is the process of planning software called? Or what is the job title of someone who does software planning?

    - by Ryan
    For example, let's say a non-technical person comes to me with their rough initial specification, and I sit down with them over a couple of weeks and help them hone, formalize and better plan the application that they want built. What is this called? Information architecture, software architecture, specification writing, software planning, requirements analysis? What is the best, most recognizable term for this?

    Read the article

  • In Windows 7, why can't I use perfmon against a remote server?

    - by SomeGuy
    I am on Windows 7 and trying to run perfmon against Windows 2003 and Windows 2008 servers. I am running into the same issue with all remote machines. When creating a data collector set, I specify a domain account that is in the Administrators group on the remote machines (and in "Performance Log Users" and "Performance Monitor Users" to be safe). On the "Available Counters" screen, when I type in a remote computer name, PerfMon locks up for a good 2-3 minutes before I can add any counters. I can then save the collector set. However, when I save it, the go/stop buttons are disabled if I click the set in the left panel, and missing if I click the data collector set itself in the right panel. I can run data collector sets against my local machine with no problem. I am opening perfmon with my local account in both scenarios. I also have the Remote Registry service started on each remote machine. What is going on?

    Read the article
