Search Results

Search found 7865 results on 315 pages for 'high density'.


  • Check whether an mmap'ed address is correct

    - by reddot
    I'm writing a high-load daemon that should run on FreeBSD 8.0 as well as on Linux. The daemon's main purpose is to serve files that are requested by an identifier. The identifier is converted into a local filename/file size via a request to a db, and I then use sequential mmap() calls to pass the file's blocks with send(). However, sometimes the file size in the db and the file size on the filesystem disagree (real size < size in db). In that situation, once all the real data blocks have been sent and the next block is mapped, mmap() returns no error, just a usual-looking address (I've checked the errno variable too; it's zero after mmap). But when the daemon tries to send that block, it gets a segmentation fault. (This behaviour is reliably reproducible on FreeBSD 8.0 amd64.) I had been using a safety check before open() to verify the size with a stat() call, but real life shows me that the segfault can still be raised in rare situations. So, my question: is there a way to check whether a pointer is accessible before dereferencing it? When I opened the core in gdb, gdb said the given address is out of bounds. Or perhaps somebody can propose another solution.
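
    One way to close the race (a sketch in C, not the daemon's actual code): re-check the size with fstat() on the already-open descriptor immediately before each mapping and clamp the length, because mmap() itself cannot report that a range lies past EOF -- the error only surfaces later as SIGBUS/SIGSEGV on first access. The sketch assumes page-aligned offsets:

        #include <sys/mman.h>
        #include <sys/stat.h>

        /* Map up to *len bytes at `offset`, but never past the file's real end.
         * Returns MAP_FAILED if nothing mappable is left (db size was stale). */
        static void *map_block(int fd, off_t offset, size_t *len)
        {
            struct stat st;
            if (fstat(fd, &st) == -1 || offset >= st.st_size)
                return MAP_FAILED;
            if (offset + (off_t)*len > st.st_size)
                *len = (size_t)(st.st_size - offset);   /* clamp to real size */
            return mmap(NULL, *len, PROT_READ, MAP_SHARED, fd, offset);
        }

    Even this leaves a small window if the file is truncated after the fstat(), so a belt-and-braces daemon would also install a SIGBUS handler around the send path.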

    Read the article

  • C# WPF abnormal CPU usage for animation

    - by 0xDEAD BEEF
    I am developing a WPF application and a client reports extremely high CPU usage (90%), which I am unable to reproduce. I have traced the bottleneck down to the lines below: a simple glow animation for a small single-LED control (a blinking LED). What could make this simple animation take up so much CPU?

        <Trigger Property="State">
          <Trigger.Value>
            <local:BlinkingLedStatus>Blinking</local:BlinkingLedStatus>
          </Trigger.Value>
          <Trigger.EnterActions>
            <BeginStoryboard Name="beginStoryBoard">
              <Storyboard>
                <DoubleAnimation Storyboard.TargetName="glow"
                                 Storyboard.TargetProperty="Opacity"
                                 AutoReverse="True" From="0.0" To="1.0"
                                 Duration="0:0:0.5" RepeatBehavior="Forever"/>
              </Storyboard>
            </BeginStoryboard>
          </Trigger.EnterActions>
          <Trigger.ExitActions>
            <StopStoryboard BeginStoryboardName="beginStoryBoard"/>
          </Trigger.ExitActions>
        </Trigger>
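
    One hypothesis the post doesn't confirm: if the client's machine falls back to software rendering (RenderCapability.Tier reports tier 0, e.g. a weak GPU or a remote-desktop session), a Forever-repeating opacity animation is re-composed on the CPU at up to 60 fps. Capping the storyboard's frame rate with WPF's Timeline.DesiredFrameRate attached property is a cheap experiment:

        <Storyboard Timeline.DesiredFrameRate="10">
          <DoubleAnimation Storyboard.TargetName="glow"
                           Storyboard.TargetProperty="Opacity"
                           AutoReverse="True" From="0.0" To="1.0"
                           Duration="0:0:0.5" RepeatBehavior="Forever"/>
        </Storyboard>

    A blink updated 10 times a second still reads as a smooth pulse, but costs a fraction of the redraws.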

    Read the article

  • How to store and collect data for mining such information as most viewed for last 24 hours, last 7 days, etc.

    - by Kirzilla
    Hello. Let's imagine we have a high-traffic project (a tube site) that should provide sorting by the options below (NOT in real time). The number of videos is about 200K, all information about the videos is stored in MySQL, and the number of daily video views is about 1.5 million. The instruments we have are the hard disk drive (text files), MySQL, and Redis. The views to sort by: top viewed; top viewed in the last 24 hours; top viewed in the last 7 days; top viewed in the last 30 days; top rated in the last 365 days. How should I store this information? The first idea is to log all visits to text files (a single file per hour, e.g. visits_20080101_00.log). At the beginning of each hour, calculate the per-video views for the previous hour and insert them into MySQL, then recalculate the last-24-hours totals and update the statistics tables. At the beginning of every day, do the same but recalculate for the last 7, 30, and 365 days. This method seems very poor to me because we have to keep 365 days of per-video history to make the calculations correct. Are there any better methods? Should we perhaps choose other instruments for this? Thank you.
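
    Since Redis is already on the list: its sorted sets fit this shape well. A sketch (Python with redis-py 3.x; the key names are made up) keeps one set per day and merges windows with ZUNIONSTORE, so no long per-video history table is needed:

        import datetime
        import redis

        r = redis.Redis()

        def record_view(video_id):
            """Called on every view: bump today's per-video counter."""
            day = datetime.date.today().strftime("%Y%m%d")
            key = f"views:day:{day}"
            r.zincrby(key, 1, video_id)
            r.expire(key, 366 * 24 * 3600)  # keep at most a year of daily sets

        def rebuild_window(days, dest):
            """Cron job: merge the last N daily sets into one leaderboard."""
            today = datetime.date.today()
            keys = [f"views:day:{(today - datetime.timedelta(d)).strftime('%Y%m%d')}"
                    for d in range(days)]
            r.zunionstore(dest, keys)       # scores are summed by default

        rebuild_window(7, "views:top:7d")
        print(r.zrevrange("views:top:7d", 0, 9, withscores=True))  # top 10

    For a sliding last-24-hours window the same pattern works with one set per hour instead of per day.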

    Read the article

  • Is JSF really ready to deliver high-performance web applications?

    - by aklin81
    I have heard a lot of good things about JSF, but as far as I know people have also had serious complaints about this technology in the past, and I am not aware of how much the situation has improved. We are considering JSF as a probable technology for a social network project, but we don't know JSF's performance figures, nor could we find any existing high-performance website that uses JSF. People complain about its performance and scalability issues. We are still not sure we are doing the right thing by choosing JSF, and would like to hear from you all and take your input into consideration. Is it possible to configure JSF to satisfy the high-performance needs of a social networking service? And to what extent is it possible to live with the current problems in JSF?

    Read the article

  • How to stop switchable graphics from switching to high-power GPU when charging the laptop?

    - by Saifallah
    I have an Acer laptop with Windows 7 64-bit and an ATI Radeon HD 6550M graphics card. Whenever I connect the power to charge the battery, the laptop automatically switches to the high-power GPU (ATI) instead of the low-power (Intel) GPU. There's an option in the BIOS to prevent this, but it makes the GPU always run on high power and I can't switch to the low-power GPU at all. How can I prevent the switchable graphics from automatically using the high-power GPU when charging?

    Read the article

  • What can be considered too high or too low volume?

    - by josinalvo
    I've asked a question about what audio volume to use when recording: recording audio: What is the best volume setting? There, I learned that I should avoid too high a volume, to prevent clipping, and too low a volume, to prevent loss of resolution. The question now is: what is too high a volume, and what is too low? I am setting the volume via the GUI for sound configuration, which has an unamplified setting, a 100% setting, and volumes beyond 100%. Beyond 100%, is there still resolution loss? And how can I tell whether clipping is going on (given that my recording program is the non-GUI ffmpeg)?
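
    For the clipping half of the question, ffmpeg itself can check a finished recording: the stock volumedetect filter prints the peak level, and a max_volume sitting at 0.0 dB is a strong hint that the signal hit the ceiling and clipped (the file name below is a placeholder):

        ffmpeg -i recording.wav -af volumedetect -f null -
        # output includes e.g.:
        #   max_volume: -0.4 dB    <- headroom left, no clipping
        #   max_volume:  0.0 dB    <- probably clipped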

    Read the article

  • Linux HA / cluster: what are the differences between Pacemaker, Heartbeat, Corosync, wackamole?

    - by Continuation
    Can you help me understand Linux HA? Pacemaker, Heartbeat, and Corosync seem to be parts of a whole HA stack, but how do they fit together? How does wackamole differ from Pacemaker/Heartbeat/Corosync? I've seen opinions that wackamole is better than Heartbeat because it's peer-based; is that valid? The last release of wackamole was 2.5 years ago. Is it still maintained or active? And what would you recommend as an HA setup for web/application/database servers?

    Read the article

  • Am I "wasting" my time learning C and other low level stuff ?

    - by Andreas Grech
    I have just recently started learning C, and the reason I did so is that, frankly, I consider myself "less of a developer" than the people who know and work with C. So I planned to start learning ASM, C, and C++, bought the K&R book, and started pushing myself to learn the C programming language. Up till now I'm doing great: learning about arrays the low-level way (i.e., the pointer-plus-offset approach), pointers and all that, and obviously asking questions on Stack Overflow for guidance. My problem is that I sometimes wonder whether, instead of learning this low-level stuff, I should spend more time learning newer, more widely used technologies; basically, more web stuff. I am well versed in both C# and ASP.NET, and currently that's what I do for a living, but there still exist Microsoft technologies that I haven't quite touched, such as ASP.NET MVC, the Entity Framework, etc. And those are only Microsoft technologies; there are other things I would like to touch upon, such as Ruby (which would lead me to Ruby on Rails), Python for Django, Java and J2EE, or maybe even PHP: basically, mainly web stuff. Mind you, I did touch upon some of these earlier on, such as PHP and Java, but I am still not as well versed in them as I am in C# and ASP.NET. Still, I think that learning other languages used in the web environment will broaden my horizons, both as a developer who loves learning and career-wise. My point is: am I really using my time correctly by learning older, lower-level stuff -- stuff that, in my current line of work, I will most probably never use, but which is still interesting to know? To be frank, I am also learning C so that I could maybe someday get into electronics and micro-controller programming, but that is a whole new world for me and, if I choose to go there, will take some time to get adjusted to; and even then, I don't know whether I could build a career in that line of work. But I still wonder about this question over and over: am I doing the right thing by learning C instead of something (web stuff) that will most probably be more useful for me career-wise? I'm sorry for asking such a long and probably boring question, but I feel this is the only place where I can ask it and get an honest answer from experts in the field. Thank you for your time.

    Read the article

  • Lowest-latency, least-overhead app server?

    - by Mark Harrison
    I'm designing an application that will have a network interface for serving large numbers of very small metadata requests. The application code itself is very fast: it basically looks up data cached in memory and sends it to the client. What's the absolute lowest latency I can get out of a network application server running on a Linux box? This will be an internal app running on gigE with no authentication. Any language/framework will be considered, with a preference for C, C++, or Python. Likewise for the protocol, although HTTP would be nice.
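
    For a concrete starting point (a sketch, not a benchmark claim): on Linux, the usual latency floor for this kind of service is a single-threaded epoll loop over non-blocking TCP sockets with TCP_NODELAY set, and a trivial framing protocol instead of HTTP. The port number and the canned reply are placeholders:

        #define _GNU_SOURCE             /* for accept4() */
        #include <netinet/in.h>
        #include <netinet/tcp.h>
        #include <sys/epoll.h>
        #include <sys/socket.h>
        #include <unistd.h>

        int main(void)
        {
            int one = 1;
            int lfd = socket(AF_INET, SOCK_STREAM | SOCK_NONBLOCK, 0);
            struct sockaddr_in a = { .sin_family = AF_INET,
                                     .sin_port = htons(9000),
                                     .sin_addr.s_addr = htonl(INADDR_ANY) };
            setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);
            bind(lfd, (struct sockaddr *)&a, sizeof a);
            listen(lfd, SOMAXCONN);

            int ep = epoll_create1(0);
            struct epoll_event ev = { .events = EPOLLIN, .data.fd = lfd };
            epoll_ctl(ep, EPOLL_CTL_ADD, lfd, &ev);

            struct epoll_event evs[64];
            char buf[512];
            for (;;) {
                int n = epoll_wait(ep, evs, 64, -1);
                for (int i = 0; i < n; i++) {
                    int fd = evs[i].data.fd;
                    if (fd == lfd) {                 /* new connection */
                        int c = accept4(lfd, 0, 0, SOCK_NONBLOCK);
                        setsockopt(c, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);
                        ev.events = EPOLLIN; ev.data.fd = c;
                        epoll_ctl(ep, EPOLL_CTL_ADD, c, &ev);
                    } else {                         /* request -> cached reply */
                        if (read(fd, buf, sizeof buf) <= 0) { close(fd); continue; }
                        write(fd, "value\n", 6);
                    }
                }
            }
        }

    With one process and one read/write round-trip per request, the wire and the kernel dominate; on gigE that typically means well under a millisecond per request.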

    Read the article

  • Zero Downtime with Hibernate

    - by Stephan Schmidt
    What changes to a database (MySQL in this case) does Hibernate survive (data, schema, ...)? I ask because of a zero-downtime scheme with Hibernate: change the database, split the app servers into two clusters, redeploy the application on one of the clusters, and then switch applications. Thanks, Stephan

    Read the article

  • Squid handling of concurrent cache misses

    - by Oliver H-H
    We're using a Squid cache to off-load traffic from our web servers, i.e. it's set up as a reverse proxy responding to inbound requests before they hit our web servers. When we get blitzed with concurrent requests for the same object that's not in the cache, Squid proxies all of the requests through to our web ("origin") servers. For us, this behavior isn't ideal: our origin servers get bogged down trying to fulfill N identical requests concurrently. Instead, we'd like the first request to proxy through to the origin server, the rest of the requests to queue at the Squid layer, and then all of them to be fulfilled by Squid when the origin server has responded to that first request. Does anyone know how to configure Squid to do this? We've read through the documentation multiple times and thoroughly web-searched the topic, but can't figure out how to do it. We use Akamai too and, interestingly, this is its default behavior. (However, Akamai has so many nodes that we still see lots of concurrent requests in certain traffic-spike scenarios, even with Akamai's super-node feature enabled.) This behavior is clearly configurable for some other caches; e.g. the Ehcache documentation offers the option "Concurrent Cache Misses: A cache miss will cause the filter chain, upstream of the caching filter, to be processed. To avoid threads requesting the same key to do useless duplicate work, these threads block behind the first thread." Some folks call this behavior a "blocking cache," since the subsequent concurrent requests block behind the first request until it's fulfilled or timed out. Thanks for looking over my noob question! Oliver
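
    For what it's worth, Squid calls this collapsed forwarding, and where it is supported it is a one-line directive (hedged: the directive exists in the Squid 2.6/2.7 line and again in 3.5+, but early 3.x releases dropped it, so check your version's documentation):

        # squid.conf: merge concurrent misses for the same URL into a single
        # origin fetch; later requests wait for the first response.
        collapsed_forwarding on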

    Read the article

  • How to make active services highly available?

    - by Jader Dias
    I know that with Network Load Balancing and Failover Clustering we can make passive services highly available. But what about active apps? Example: one of my apps retrieves content from an external resource at a fixed interval. I have imagined the following scenarios: 1) Run it on a single machine. Problem: if that instance dies, the content won't be retrieved. 2) Run it on every machine in the cluster. Problem: the content will be retrieved multiple times. 3) Have it on every machine in the cluster, but run it on only one of them; each instance checks some sort of shared resource to decide whether it is its turn to do the task or not. While thinking about solution #3, I wondered what the shared resource should be. I have thought of creating a table in the database and using it to take a global lock. Is this the best solution? How do people usually do this? By the way, it's a C# .NET WCF app running on Windows Server 2008.
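
    Since a database is already in the picture, one common shape for that shared resource (a sketch, not something the post prescribes) is SQL Server's built-in application lock rather than a hand-rolled lock table: sp_getapplock hands out a named exclusive lock that dies with the session, so a crashed node can never hold it forever. The resource name below is made up:

        DECLARE @rc int;
        EXEC @rc = sp_getapplock @Resource = 'fetch-external-content',
                                 @LockMode = 'Exclusive',
                                 @LockOwner = 'Session',
                                 @LockTimeout = 0;
        IF @rc >= 0   -- 0 or 1 means the lock was granted
        BEGIN
            -- this instance won: do the retrieval, then release
            EXEC sp_releaseapplock @Resource = 'fetch-external-content',
                                   @LockOwner = 'Session';
        END
        -- a negative @rc means another instance holds the lock: skip this turn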

    Read the article

  • How to hand-over a TCP listening socket with minimal downtime?

    - by Shtééf
    While this question is tagged EventMachine, generic BSD-socket solutions in any language are much appreciated too. Some background: I have an application listening on a TCP socket. It is started and shut down with a regular System V-style init script. My problem is that it needs some time to start up before it is ready to service the TCP socket. It's not too long, perhaps only 5 seconds, but that's 5 seconds too long when a restart needs to be performed during a workday. It's also crucial that existing connections remain open and are finished normally. Reasons for restarting the application are patches, upgrades, and the like; I unfortunately find myself in the position that, every once in a while, I need to do this kind of thing in production. The question: I'm looking for a way to do a clean hand-over of the TCP listening socket from one process to another, and as a result get only a split second of downtime. I'd like existing connections/sockets to remain open and finish processing in the old process, while the new process starts servicing new connections. Is there some proven method of doing this using BSD sockets? (Bonus points for an EventMachine solution.) Are there perhaps open-source libraries out there implementing this that I can use as-is, or use as a reference? (Again, non-Ruby and non-EventMachine solutions are appreciated too!)
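
    The proven BSD-sockets method is descriptor passing: the old process hands its listening fd to the freshly started one over a Unix-domain socket via an SCM_RIGHTS control message, so the kernel's accept queue never closes and the downtime is effectively zero. A minimal C sketch of the sending side (the receiver does the mirror-image recvmsg(); names and error handling are illustrative only):

        #include <string.h>
        #include <sys/socket.h>
        #include <sys/uio.h>

        /* Send `listen_fd` to the peer connected on `unix_sock`. */
        int send_listen_fd(int unix_sock, int listen_fd)
        {
            char dummy = 'F';                     /* must send >= 1 data byte */
            struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
            char ctrl[CMSG_SPACE(sizeof(int))];
            struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                                  .msg_control = ctrl,
                                  .msg_controllen = sizeof ctrl };
            struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
            cm->cmsg_level = SOL_SOCKET;
            cm->cmsg_type  = SCM_RIGHTS;          /* payload is a file descriptor */
            cm->cmsg_len   = CMSG_LEN(sizeof(int));
            memcpy(CMSG_DATA(cm), &listen_fd, sizeof(int));
            return sendmsg(unix_sock, &msg, 0) == 1 ? 0 : -1;
        }

    After the hand-over, the old process simply stops calling accept() and drains its existing connections while the new one services fresh ones.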

    Read the article

  • Should I be regularly shrinking my DB or at least my log file?

    - by Tom
    My question is: should I be running one or both of the shrink commands regularly, DBCC SHRINKDATABASE or DBCC SHRINKFILE?

    Background: the database is 200 GB and the logs are 150 GB. Running this command:

        SELECT name,
               size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128.0
                   AS AvailableSpaceInMB
        FROM sys.database_files;

    produces this output:

        MyDB:     159.812500 MB free
        MyDB_Log: 149476.390625 MB free

    So it seems there is some free space. We back up the transaction logs every hour, take a differential backup 5 nights a week, and a full backup the other 2 nights of the week.
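
    A hedged note on those numbers: a log file that is almost entirely free space usually points to a one-off event (a huge transaction, or a gap in log backups) rather than something to shrink on a schedule; regularly scheduled shrinks just make the files regrow and fragment. The one-off fix, with an assumed target size, looks like:

        -- after the next hourly log backup has cleared the log:
        USE MyDB;
        DBCC SHRINKFILE (N'MyDB_Log', 10240);   -- target size in MB (assumption)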

    Read the article

  • Best scaling methodologies for a high-traffic web application?

    - by tester2001
    We have a new project for a web app that will display banner ads on websites (as a network), and our estimate is that it will handle 20 to 40 billion impressions a month. Our current language is ASP, but we are moving to PHP. Does PHP 5 have limits when scaling a web application, or should I have our team invest in picking up JSP? Or is it a matter of the app server and/or the DB? We plan to use Oracle 10g as the database.

    Read the article

  • HA with nginx and cloud environment

    - by gotts
    I have a node in a cloud environment which currently runs nginx with mongrels behind it. This is what the nginx config looks like:

        upstream mongrel {
          server 127.0.0.1:8000;
          server 127.0.0.1:8001;
          server 127.0.0.1:8002;
        }

    I want to achieve the following: add another node, and have nginx learn about the new node automatically, without my stopping nginx, changing the config (manually adding the new node's mongrels), and starting it again. How can I make my load balancer (nginx) self-aware of the nodes in the cloud?
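
    Stock nginx has no runtime upstream discovery, but it can re-read its config without dropping a single connection, so the usual workaround is to have the provisioning script rewrite the upstream block and trigger a graceful reload (the new address below is made up):

        upstream mongrel {
          server 127.0.0.1:8000;
          server 127.0.0.1:8001;
          server 127.0.0.1:8002;
          server 10.0.0.7:8000;   # appended by the script for the new node
        }

        # old workers finish their in-flight requests; new workers start
        # with the updated upstream list
        nginx -s reload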

    Read the article

  • Persisting high score table in flash game without a network. (Featuring: HttpListenerException)

    - by bearcdp
    Hi everyone, this question is very programming-centric, but it's for a game so I figured I might as well post it here. I'm doing polishing work on a GGJ '11 game because it will be shown at an indie arcade tomorrow afternoon, and they're expecting our final build in the morning. We'd like to have a high-score table that displays during attract mode, but since it's Flash (Flixel) that would require some networking, Mochi, or the like to keep a record of the scores, and the machine we'd be running on probably won't have network access. As a quick solution, I thought I'd write a dinky little high-score server in C#/.NET that could take basic GET requests for submitting scores and getting the score list. We're talking REAL basic: blocking while waiting for an incoming request, a run-and-forget console app, etc. There's no guarantee that our .swf won't get reloaded, and we'd like the scores to persist, so this server pretty much exists to keep a safe copy of the scores that the game can add to and request; occasionally the server will write the scores to a flat text file. But HttpListener is giving me error code 87, 'The parameter is incorrect.' Any idea what I'm doing wrong? Or better yet, am I barking up the wrong tree and missing an obviously simpler solution? This is all I've got so far in my Main():

        HttpListener listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:66666/");
        listener.Start();

    The exception happens at listener.Start(), and the stack trace is:

        at System.Net.HttpListener.AddAllPrefixes()
        at System.Net.HttpListener.Start()
        at WOSEBCE_ScoreServer.Program.Main(String[] args) in C:\Users\Michael\Documents\Visual Studio 2010\VS2010 Projects\WOSEBCE_ScoreServer\WOSEBCE_ScoreServer\Program.cs:line 24
        at System.AppDomain._nExecuteAssembly(RuntimeAssembly assembly, String[] args)
        at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
        at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
        at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
        at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean ignoreSyncCtx)
        at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
        at System.Threading.ThreadHelper.ThreadStart()
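
    An observation from the listing alone (worth checking before anything fancier): TCP port numbers only go up to 65535, so the prefix http://localhost:66666/ is itself the invalid parameter, which is consistent with Win32 error 87. Moving to an in-range port may be the entire fix:

        HttpListener listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:16666/");  // any free port <= 65535
        listener.Start();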

    Read the article

  • Python - calculate multinomial probability density functions on large dataset?

    - by Seafoid
    Hi, I originally intended to use MATLAB to tackle this problem, but the built-in functions have limitations that do not suit my goal; the same limitation occurs in NumPy. I have two tab-delimited files. The first shows amino-acid residue, frequency, and count for an in-house database of protein structures, i.e.:

        A 0.25 1
        S 0.25 1
        T 0.25 1
        P 0.25 1

    The second file consists of quadruplets of amino acids and the number of times they occur, i.e.:

        ASTP 1

    Note: there are 8,000 such quadruplets. Based on the background frequency of occurrence of each amino acid and the count of quadruplets, I aim to calculate the multinomial probability density function for each quadruplet and subsequently use it as the expected value in a maximum-likelihood calculation. The multinomial distribution is as follows:

        f(x|n, p) = n!/(x1!*x2!*...*xk!) * ((p1^x1)*(p2^x2)*...*(pk^xk))

    where x is the number of each of k outcomes in n trials with fixed probabilities p. n is 4 in all cases in my calculation. I have created three functions to calculate this distribution:

        # functions for multinomial distribution
        def expected_quadruplets(x, y):
            return x * y

        # probabilities of occurrence raised to the number of occurrences
        def prod_prob(p1, a, p2, b, p3, c, p4, d):
            return (pow(p1, a)) * (pow(p2, b)) * (pow(p3, c)) * (pow(p4, d))

        # factorial() and multinomial_coefficient() work in tandem to
        # calculate C, the multinomial coefficient
        def factorial(n):
            if n <= 1:
                return 1
            return n * factorial(n - 1)

        def multinomial_coefficient(a, b, c, d):
            n = 24.0    # 4! -- n is always 4 here
            return n / (factorial(a) * factorial(b) * factorial(c) * factorial(d))

    The problem is how best to structure the data in order to tackle the calculation most efficiently, in a manner that I can read (you guys write some cryptic code :-)) and that will not create an overflow or runtime error. To date, my data are represented as nested lists:

        amino_acids = [['A', '0.25', '1'], ['S', '0.25', '1'],
                       ['T', '0.25', '1'], ['P', '0.25', '1']]
        quadruplets = [['ASTP', '1']]

    I initially intended to call these functions within a nested for loop, but this resulted in runtime or overflow errors. I know that I can raise the recursion limit, but I would rather do this more elegantly. I had the following:

        for i in quadruplets:
            quad = i[0].split(' ')
            for j in amino_acids:
                for k in quadruplets:
                    for v in k:
                        if j[0] == v:
                            multinomial_coefficient(int(j[2]), int(j[2]), int(j[2]), int(j[2]))

    I haven't really gotten to how to incorporate the other functions yet. I think my current nested-list arrangement is suboptimal. I wish to compare each letter within the string 'ASTP' with the first component of each sublist in amino_acids. Where a match exists, I wish to pass the appropriate numeric values to the functions using indices. Is there a better way? Can I append the appropriate numbers for each amino acid and quadruplet to a temporary data structure within a loop, pass them to the functions, and clear it for the next iteration? Thanks, S :-)
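
    A restructuring sketch (the names are mine, not from the post): turning amino_acids into a dict keyed by residue removes the nested loops entirely, collections.Counter converts each quadruplet into its per-letter counts x, and math.factorial sidesteps the recursion limit:

        import math
        from collections import Counter

        amino_acids = {'A': 0.25, 'S': 0.25, 'T': 0.25, 'P': 0.25}  # residue -> frequency
        quadruplets = [('ASTP', 1)]

        def multinomial_pmf(quad, freqs):
            """f(x | n=len(quad), p=freqs) for one quadruplet string."""
            counts = Counter(quad)              # e.g. {'A': 1, 'S': 1, 'T': 1, 'P': 1}
            coeff = math.factorial(len(quad))   # n!
            prob = 1.0
            for residue, x in counts.items():
                coeff //= math.factorial(x)     # divide by each x_k!
                prob *= freqs[residue] ** x     # multiply p_k^x_k
            return coeff * prob

        for quad, count in quadruplets:
            print(quad, count, multinomial_pmf(quad, amino_acids))  # ASTP -> 0.09375

    Because a dict lookup is O(1), this stays fast over all 8,000 quadruplets, and nothing here can overflow for n = 4.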

    Read the article

  • High-Salaried Investment Banking Jobs for Developers — What are the pitfalls?

    - by Jaywalker
    This question might make more sense to somebody with multi-threaded programming experience in Java/C++ and some job experience in London or Singapore. There is a huge market of investment-banking development jobs with astonishingly high salaries (sometimes more than 100K pounds per year). Can someone with experience as a front-office/trading developer tell me what the requirements are to land this type of job? And what are the downsides I should be ready for?

    Read the article

  • Who Really Contributed the High-End Tech to Project Monterey?

    Groklaw: "Here's something interesting, a Santa Cruz 8-K from October 26, 1998, which consists mostly of two press releases announcing the IBM-SCO joint partnership to do Project Monterey. Guess who would be providing the bulk of the high-end enterprise capabilities and contributing them to UnixWare? Hint: Not SCO"

    Read the article

  • From the Tips Box: Xbox Output on Two Screens, High Tech Halloween Props, and Old Flash Drives as Password Reset Disks

    - by Jason Fitzpatrick
    Once a week we round up some great reader tips and share them with everyone. This week we're looking at outputting your Xbox 360 to two screens, spooky high-tech Halloween props, and recycling old flash drives as password reset disks.

    Read the article

  • Will a high reputation on Stack Overflow help you get a good job?

    - by Shamim Hafiz
    In a post, Joel Spolsky mentioned that a five-digit Stack Overflow reputation can help you earn a job paying $100k+. How much of that is real? Would anyone like to share a success story about getting a high-paying job by virtue of their reputation on Stack Exchange sites? I read somewhere that a person got an interview offer from Google because a recruiter found his Stack Overflow reputation impressive. Anyone else with similar stories?

    Read the article

  • How to Get High PR Backlinks at ZERO Cost!

    We all know that having quality links is one of the most important aspects of SEO, and here is how to get high-PR backlinks at no cost to you at all! Now, first, a bit of information most people overlook. There are two types of links you can get: do-follow links and no-follow links.

    Read the article

  • High-res icon in Windows Vista alt-tab thumbnail preview?

    - by netvope
    I have customized my alt-tab screen with the following registry settings:

        [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\AltTab]
        "OverlayIconPx"=dword:00000040
        "MaxThumbSizePx"=dword:00000100
        "MinThumbSizePcent"=dword:00000064

    It works great: the thumbnail becomes 256 pixels wide and the icon at the corner of the thumbnail becomes 64x64 pixels. However, Windows doesn't load the high-res icons from the programs; instead, it takes the 16x16-pixel icon and scales it up with nearest-neighbor. I'm sure the programs have high-res icons because I saw them in the "Extra Large Icons" view in Explorer. So the question is: how can I force Windows to load the high-res icons for the alt-tab thumbnail preview? (Perhaps a registry key, or a .dll hack/injection?)

    Read the article

  • 32-bit SQL Server with AWE NOT enabled: Buffer Cache Hit Ratio high, Disk Read Queue VERY high. Why?

    - by chenwq
    We have a "SQLServer 2005 SP3 32bit Enterprise Edition" running on a 32 bit Windows 2003 32bit Enterprise Edition 12GB RAM with AWE enabled using RAID5(5 pysical disks). We tuned AWE to enabled and restart sqlserver this afternoon after work, hope the performance will be better than old time. But there is something that we are very confused. On working days, SQLServer has a very bad performance. When we are looking for reasons, we check Windows Performance counter. Avg. Disk Read Queue Lenght > 140 Avg. Disk Write Queue Length < 1 SQL Server Buffer Cache Hit Ratio > 96% %Processor Time < 30% SQL Server Total Server Memory < 1.8G Obviously, without AWE enabled, SQL Server can use only less than 2G memory. My Question is: why "SQL Server Total server Memory" is less than 2G?I think SQL Server will use all 2G process address space. Does this counter count anything out? we known that sql server is sufferring lack of memory, but why "buffer hit ratio“ is as high as 96? Any advice is welcomed!

    Read the article
