Search Results

Search found 60072 results on 2403 pages for 'application performance'.


  • scp vs netatalk, samba, and/or vsftpd with External USB drive

    - by KitsuneYMG
    I set up an Ubuntu server machine to share an ext2-formatted external USB drive. When attempting to copy a single 275MB file from that device through netatalk, I get an estimated transfer time of around 45 minutes. With Samba and FTP (using vsftpd) it is 1+ hours! Using scp to copy the file completes the transfer within 5 minutes. Another option, ssh+cp from the external device to ~ and then using netatalk to grab it from there, results in a total time of around 7 minutes. Does anyone have a clue what is misconfigured? Assuming that nothing is, is there any fs/pseudo-fs that would use the internal HDD as an intermediate location/onion-layer for the external HDD (for reads only)? Details: AppleVolumes.default: /mnt/ext USB allow:username cnidscheme:cdb options:usedots,upriv

    Read the article

  • Zabbix - Some of the monitored items don't get refreshed. How to find the reason?

    - by Niro
    I'm experiencing a strange issue with Zabbix monitoring a MySQL server. Most of the data from the server, such as MySQL queries per second, MySQL uptime, buffer memory, etc., update nicely, while some data like CPU iowait time (avg1), host local time, MySQL number of threads, and other items which were monitored in the past have a last-check time of about a week ago. I can't find any logic in this; for example, MySQL number of threads and MySQL queries per second are obtained in a similar way, so it does not make sense that one of them is monitored and one is not. Please help: how can I fix this?

    Read the article

  • Almost Realtime Data and Web application

    - by Chris G.
    I have a computer that is recording 100 different data points into an OPC server. I've written a simple OPC client that can read all of this data. I have a front-end website on a different network that I would like to consume this data. I could easily have the OPC client send the data to a SQL server and let the website read from it, but that would be a lot of writes: if I wanted the data updated every 10 seconds, I'd be writing to the database every 10 seconds. (I could probably serialize the 100 points to get 1 write per 10 seconds, but that would also limit my ability to search the data later.) This solution wouldn't scale very well; if I had 100 of these computers, the situation would quickly grow out of hand. Obviously I am well out of my league here, and I have no experience working with this much data. What are my options, and what should I research? A batching sketch follows below.
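
    A minimal sketch of the batching idea, assuming a Python collector; read_all_points is a hypothetical stand-in for the real OPC client, not an actual OPC library API:

    import json
    import time

    def read_all_points():
        # Hypothetical stand-in for the real OPC client read.
        return {f"point_{i}": i * 0.5 for i in range(100)}

    def collect_batches(interval_seconds=10):
        """Snapshot all points once per interval and yield one serialized row."""
        while True:
            snapshot = {"ts": time.time(), "values": read_all_points()}
            # One database write per interval instead of 100: store the JSON
            # blob now, and extract individual points later if search matters.
            yield json.dumps(snapshot)
            time.sleep(interval_seconds)

    Storing the serialized blob trades write volume for query flexibility; putting a message queue between the collectors and the database would let 100 such computers batch independently without overwhelming a single writer.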

    Read the article

  • How do I disable the global application menu?

    - by Michael Ekstrand
    I'm fairly excited for Unity, as it looks like a promising new direction for Ubuntu. However, I do have a concern - will it be possible to use Unity without the global menu? I have my window manager set to focus-follows-mouse/sloppy focus, and find the productivity gains to be immense. Sloppy focus is incompatible, however, with global menus, as it is possible for the focus to change while you move from window to menu. Will Unity support an option to use window menus while still using Unity?

    Read the article

  • Is it important to obfuscate C++ application code?

    - by user827992
    In the Java world, it seems to sometimes be a problem, but what about C++? Are there different solutions? I was thinking about the fact that someone can replace the C++ library of a specific OS with a different version of the same library, full of debug symbols, to understand what my code does. Is it a good thing to use standard or popular libraries? This can also happen with some DLL under Windows replaced with the "debug version" of that library. Is it better to prefer static compilation? In commercial applications, I see that they compile the core of their app statically, and for the most part the DLLs (dynamic libraries in general) are used to offer some third-party technologies like anti-piracy solutions (I see this in many games), GUI libraries (like Qt), OS libraries, etc. Is static compilation the equivalent of obfuscation in the Java world? In other words, is it the best and most affordable solution to protect your code?

    Read the article

  • How to rewrite to a virtual directory with a different application

    - by user42181
    Hi, I have a CMS application that manages multiple websites. Today, whenever I change the code-behind of one of these websites, I have to rebuild the DLL for all the websites and deploy it, which disconnects all current sessions and is really bad. IIS is configured to listen to all domain requests; if the request is for one of the websites' domains, the application rewrites it. For example, if someone requests www.example.com, and example.com is configured in the application as website 12, the request is rewritten to www.example.com/websites/12/default.aspx. This is done for all websites. We want to separate the websites' DLLs from each other and from the main CMS. We have a virtual directory for each website, but when trying to rewrite to it, we discovered that IIS does not support this (we get a "Could not load type '_12._Default'" error). How can we perform this rewrite so that it does rewrite to virtual directories? Or, if anyone has another solution to the underlying DLL separation problem, that would work too. Thanks in advance

    Read the article

  • SQL server 2000 reporting bad values to ASP.Net Application

    - by Ben
    I have an instance of SQL Server 2000 (8.0.2039) with a rather simple table. We recently had users complain about an application I wrote returning bad values for some of the dates in the database. When I query the table directly via Server Management Studio, it returns the correct values, but the identical queries from my application report the wrong values, though only for a couple of dates. I have been over the code, and it is solid; if the error were in the code, all of the dates reported should be wrong. I have also run the code against an identical test database, and everything is reported properly. I believe the problem may lie in the SQL instance itself, which is why I am posting on Server Fault. My question is: has anyone heard of a database reporting bad (incorrect) date values when queried via a web application? It should be noted that this particular server was once manually rebuilt after having a cluster clean run on it.

    Read the article

  • HP DL185 - very slow disk read speed

    - by fistameeny
    Hi, I have an HP DL185 G6 server (12-disk model) with the following spec:

    Quad Core Xeon 2.27GHz
    6GB RAM
    HP P212 RAID controller with battery backup
    2 x 128GB 15K SAS 3.5" (RAID-1 for the operating system)
    4 x 750GB 7.5K SAS 3.5" (RAID-5 for the data, 2TB usable space)

    The operating system is Ubuntu Server 9.10. Both arrays have been formatted as EXT4. We are finding that the read speed of the RAID-5 array is poor. Disk test results below:

    sudo hdparm -tT /dev/cciss/c0d1p1
    /dev/cciss/c0d1p1:
    Timing cached reads:   15284 MB in 2.00 seconds = 7650.18 MB/sec
    Timing buffered disk reads:  74 MB in 3.02 seconds = 24.53 MB/sec

    For comparison, the RAID-1 array performs as follows:

    sudo hdparm -tT /dev/cciss/c0d0p1
    /dev/cciss/c0d0p1:
    Timing cached reads:   15652 MB in 2.00 seconds = 7834.26 MB/sec
    Timing buffered disk reads:  492 MB in 3.01 seconds = 163.46 MB/sec

    We thought this was because, with no battery, the read/write cache was disabled. We have since bought and installed the battery backup, used the HP bootable CD to change the cache settings to 50% read / 50% write, and checked that the cache is enabled on both the drives and the controller. Is there something I'm missing?

    Read the article

  • On Mac OS X how can I monitor what is using my internet connection?

    - by Jon Hopkins
    I've got a relatively limited broadband connection (I live miles from the nearest exchange), and from time to time net access (but nothing else) slows to a near crawl. I know from a bit of monitoring software that the connection is being fairly heavily used, which would explain it, but I don't know what's using it. There are certainly plenty of things that might be (these days there are dozens of apps that will regularly or infrequently check data or download updates), but how can I find out? I'm happy to pay (a small amount of) money if needed, though in that case I'd rather it were a recommendation than me just Googling for something.

    Read the article

  • Best Architecture for ASP.NET WebForms Application

    - by stack man
    I have written an ASP.NET WebForms portal for a client. The project has kind of evolved rather than being properly planned and structured from the beginning. Consequently, all the code is mashed together within the same project and without any layers. The client is now happy with the functionality, so I would like to refactor the code such that I will be confident about releasing the project. As there seem to be many differing ways to design the architecture, I would like some opinions about the best approach to take.

    FUNCTIONALITY: The portal allows administrators to configure HTML templates. Other associated "partners" can display these templates by adding IFrame code to their sites. Within these templates, customers can register and purchase products. An API has been implemented using WCF, allowing external companies to interface with the system as well. An Admin section allows administrators to configure various functionality and view reports for each partner. The system sends out invoices and email notifications to customers.

    CURRENT ARCHITECTURE: It currently uses EF4 to read/write to the database. The EF objects are used directly within the aspx files. This has facilitated rapid development while I have been writing the site, but it is probably unacceptable to keep it that way, as it tightly couples the database to the UI. Specific business logic has been added to partial classes of the EF objects.

    QUESTIONS: The goal of refactoring is to make the site scalable, easily maintainable and secure. 1) What kind of architecture would be best for this? Please describe what should be in each layer, and whether I should use DTOs, POCOs, the Active Record pattern, etc. 2) Is there a robust way to auto-generate DTOs/BOs so that any future enhancements will be simple to implement despite the extra layers? 3) Would it be beneficial to convert the project from WebForms to MVC?

    Read the article

  • How To Troubleshoot Excess Time From Connect to First Byte?

    - by Gaia
    I measured load times for a WordPress 2.9.2 install on Apache 2.2.3, and I was intrigued by the long periods between connect and first byte for the CSS and image files. The load average is 0.0, 0.0, 0.0, and there is 150MB of free RAM on the VPS. Pingdom results are at http://imagebin.ca/img/6UaiOU.png. How do I gain insight into the possible causes of this problem, and how would I troubleshoot it? Thanks
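
    One way to get that insight is to split the latency yourself. A rough sketch (assuming plain HTTP on port 80; the host name is a placeholder) that separates TCP connect time from server think time:

    import socket
    import time

    def measure_ttfb(host, path="/", port=80):
        """Split request latency into TCP connect time and time to first byte."""
        t0 = time.time()
        sock = socket.create_connection((host, port), timeout=10)
        t_connect = time.time() - t0

        request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
        t1 = time.time()
        sock.sendall(request.encode("ascii"))
        sock.recv(1)  # blocks until the first response byte arrives
        t_first_byte = time.time() - t1
        sock.close()
        return t_connect, t_first_byte

    connect, ttfb = measure_ttfb("example.com")
    print(f"connect: {connect * 1000:.1f} ms, first byte: {ttfb * 1000:.1f} ms")

    A short connect time followed by a long first-byte gap, on a box with zero load average, usually points at the application side (Apache configuration, slow handlers, blocking DNS lookups) rather than the network.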

    Read the article

  • Multiple columns in a single index versus multiple indexes

    - by Tim Coker
    The short version of my question is: what's the difference between three indexes each indexing a single column and one index indexing three columns? Background follows. I'm primarily a programmer but have to do DBA work because we don't have a DBA. I'm evaluating our indexes versus the queries run against a particular table. The table has 3 columns that I'm often filtering against or getting the max value of. Most of the time the queries look like

    select max(col_a) from table where col_b = 'avalue'

    or

    select col_c from table where col_b = 'avalue' and col_a = 'anothervalue'

    All columns are independently indexed. My question is: would I see any difference if I had an index covering col_b and col_a together, since they can appear in a where clause together?
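
    The principle is easy to see with a small experiment; a sketch using SQLite (SQL Server's optimizer differs, but the covering behaviour carries over):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (col_a TEXT, col_b TEXT, col_c TEXT)")

    # Independent single-column indexes: a query filtering on col_b and
    # aggregating col_a can use at most one of them.
    conn.execute("CREATE INDEX idx_a ON t (col_a)")
    conn.execute("CREATE INDEX idx_b ON t (col_b)")

    # Composite index: rows are ordered by col_a within each col_b value,
    # so max(col_a) for one col_b is a single seek to the end of a range.
    conn.execute("CREATE INDEX idx_b_a ON t (col_b, col_a)")

    for row in conn.execute(
        "EXPLAIN QUERY PLAN SELECT max(col_a) FROM t WHERE col_b = 'avalue'"
    ):
        print(row)  # expect a search using the covering index idx_b_a

    With only the single-column indexes, the engine seeks on col_b and then scans the matching rows for the maximum; the composite index answers the same query without touching the table at all.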

    Read the article

  • Should I use "Raid 5 + spare" or "Raid 6"?

    - by Trevor Boyd Smith
    What is "Raid 5 + Spare" (excerpt from User Manual, Sect 4.17.2, P.54): RAID5+Spare: RAID 5+Spare is a RAID 5 array in which one disk is used as spare to rebuild the system as soon as a disk fails (Fig. 79). At least four disks are required. If one physical disk fails, the data remains available because it is read from the parity blocks. Data from a failed disk is rebuilt onto the hot spare disk. When a failed disk is replaced, the replacement becomes the new hot spare. No data is lost in the case of a single disk failure, but if a second disk fails before the system can rebuild data to the hot spare, all data in the array will be lost. What is "Raid 6" (excerpt from User Manual, Sect 4.17.2, P.54): RAID6: In RAID 6, data is striped across all disks (minimum of four) and a two parity blocks for each data block (p and q in Fig. 80) is written on the same stripe. If one physical disk fails, the data from the failed disk can be rebuilt onto a replacement disk. This Raid mode can support up to two disk failures with no data loss. RAID 6 provides for faster rebuilding of data from a failed disk. Both "Raid 5 + spare" and "Raid 6" are SO similar ... I can't tell the difference. When would "Raid 5 + Spare" be optimal? And when would "Raid 6" be optimal"? The manual dumbs down the different raid with 5 star ratings. "Raid 5 + Spare" only gets 4 stars but "Raid 6" gets 5 stars. If I were to blindly trust the manual I would conclude that "Raid 6" is always better. Is "Raid 6" always better?

    Read the article

  • Diagnosing high sys CPU load with low IO

    - by incous
    A Linux server running Ubuntu 12.04 LTS with LAMP has shown strange behaviour since last week:

    - CPU %sys is higher than before, nearly equal to %usr (previously %sys was small compared with %usr)
    - IO is down to a half or a third of what it was the week before

    I have tried to diagnose the process/CPU with some commands (top/vmstat/mpstat/sar) and see that it may be a bit high on timer/resched interrupts. I don't know what that means; I'm open to any suggestion. A sampling sketch follows below.
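
    To quantify the split before digging into interrupts, sampling the kernel counters works; a sketch using the third-party psutil package (an assumption: it is not part of the stock install):

    import time
    import psutil

    # Sustained high "system" with low "iowait" points at kernel work
    # (interrupts, scheduler churn) rather than the disks.
    t = psutil.cpu_times_percent(interval=5)
    print(f"usr={t.user:.1f}%  sys={t.system:.1f}%  iowait={t.iowait:.1f}%")

    # Context switches and interrupts are cumulative counters, so take the
    # difference of two samples to get per-second rates.
    a = psutil.cpu_stats()
    time.sleep(5)
    b = psutil.cpu_stats()
    print(f"ctx switches/s: {(b.ctx_switches - a.ctx_switches) / 5:.0f}")
    print(f"interrupts/s:   {(b.interrupts - a.interrupts) / 5:.0f}")

    If the context-switch and interrupt rates really have jumped, tools like pidstat -w or perf top can attribute them to specific processes.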

    Read the article

  • Application specific environment variable settings

    - by SuperElectric
    I'm trying to work around a known bug in Ubuntu 9.10, where using the scrollbar in Emacs causes text to be highlighted and the cursor to move. This page here shows that you can fix this by setting an environment variable before launching Emacs:

    $ GDK_NATIVE_WINDOWS=1 emacs

    So a lazy fix would be to alias "emacs" in my .bashrc:

    alias emacs="GDK_NATIVE_WINDOWS=1 emacs"

    This, however, has the drawback of setting this environment variable for all subsequent commands run from that shell. Is there any way to set GDK_NATIVE_WINDOWS=1 for just emacs, whenever I run emacs?
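
    For what it's worth, the one-off VAR=value prefix (and an alias wrapping it) already scopes the variable to that single command rather than the whole shell. A small Python illustration of the same semantics (names as in the question):

    import os
    import subprocess

    # The shell form `GDK_NATIVE_WINDOWS=1 emacs` amounts to this: copy the
    # current environment, add one variable, and pass it only to the child.
    env = os.environ.copy()
    env["GDK_NATIVE_WINDOWS"] = "1"
    subprocess.Popen(["emacs"], env=env)
    # The parent shell's environment is untouched, so later commands see
    # no GDK_NATIVE_WINDOWS at all.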

    Read the article

  • Linux server became extremely slow

    - by Ariel Aharonson
    I have a file sharing website, and my files are hosted on a server with these specifications:

    32GB RAM
    12 x 3TB disks
    2 x Intel Quad Core E5620

    Files on this server are up to 4GB each. 446GB is used (of 36TB):

    [root@hosted-by ~]# df -h
    Filesystem                       Size  Used Avail Use% Mounted on
    /dev/sda2                         50G  2.7G   44G   6% /
    tmpfs                             16G     0   16G   0% /dev/shm
    /dev/sda1                         97M   57M   36M  62% /boot
    /dev/mapper/VolGroup01-LogVol00   33T  494G   33T   2% /home

    And take a look at this: why is the wa% so high? (I think that is what makes the server so slow.)

    Read the article

  • MongoDB: ReplicaSet slower than a corresponding Master/Slave config

    - by SecondThought
    Is it true that a MongoDB configured as a replica set (let's say two nodes + an arbiter) will always be slower than the same DB and server specs configured as a master? I've run some tests and found that for a fresh DB, the replica set is a little quicker than the master/slave config, but when the DB grows bigger than ~100k records, the latter gets much snappier. Am I missing something here? PS: I was testing it with the mongoid driver for Ruby.

    Read the article

  • MacBook Pro 2010 13.3'' 2.4 vs. 2.66GHz

    - by Milde
    Hi, is the 13.3'' MBP 2.66GHz worth the extra 300€ compared to the 2.4GHz version? Which CPUs are installed, the P8600/P8800? It's 300€ for 70GB more space and 0.26GHz; would it be better to put the 300€ toward a solid-state disk? What's your opinion? Thanks in advance, Milde

    Read the article

  • Windows application shortcut icon cannot be removed

    - by Claudiu Constantin
    I recently installed an application on my Windows 7 desktop. After the install, this application created a strange icon on my desktop which cannot be removed, deleted, or renamed. I find this quite intrusive, and I keep wondering if this is a normal/legal case. Is an application allowed to do this? I don't remember having an option to allow this "Chuck Norris" icon on my desktop. Any information on this will be highly appreciated. Edit: What this icon does: when you drag a file over it, it applies some "deep removal" to it. Its context menu is limited to "Open" (which does nothing) and "Create shortcut".

    Read the article

  • Why change net.inet.tcp.tcbhashsize in FreeBSD?

    - by sh-beta
    In virtually every FreeBSD network tuning document I can find:

    # /boot/loader.conf
    net.inet.tcp.tcbhashsize=4096

    This is usually paired with some unhelpful statement like "TCP control-block hash table tuning" or "Set this to a reasonable value." man 4 tcp isn't much help either:

    tcbhashsize    Size of the TCP control-block hash table (read-only). This may be tuned using the kernel option TCBHASHSIZE or by setting net.inet.tcp.tcbhashsize in the loader(8).

    The only document I can find that touches on this mysterious thing is the Protocol Control Block Lookup subsection beneath Transport Layer in "Optimizing the FreeBSD IP and TCP Stack", but its description is more about potential bottlenecks in using it. It seems tied to matching new TCP segments to their listening sockets, but I'm not sure how. What exactly is the TCP control block used for? Why would you want to set its hash size to 4096 or any other particular number?
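
    A toy model of what the setting controls, assuming the usual chained-hash arrangement (the real kernel structure has more detail than this):

    # Each incoming segment is matched to its connection by hashing the
    # (src addr, src port, dst addr, dst port) tuple into a bucket and
    # walking that bucket's chain of control blocks.
    def average_chain_length(connections, buckets):
        return connections / buckets

    for buckets in (512, 4096, 32768):
        print(f"{buckets:6d} buckets, 50,000 connections -> "
              f"~{average_chain_length(50_000, buckets):.1f} entries per lookup")

    An undersized table turns every packet's lookup into a linear walk of a long chain, which is why busy servers raise the value; an oversized one mostly just wastes a little memory.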

    Read the article

  • Monitoring tools that can take high rate and high volume?

    - by Jon Watte
    We're using Cacti with RRDTool to monitor and graph about 100,000 counters spread across about 1,000 Linux-based nodes. However, our current setup generally only gives us 5-minute graphs (with some data being minute-based); we often make changes where seeing feedback in "near real time" would be of value. I'd like approximately a week of 5- or 10-second data, a year of 1-minute data, and 5 years of 10-minute data. I have SSD disks and a dual-hexa-core server to spare.

    I tried setting up a Graphite/carbon/whisper server and had about 15 nodes pipe to it, but it only has "average" for the retention function when promoting to older buckets. This is almost useless: I'd like min, max, average, standard deviation, and perhaps "total sum" and "number of samples" or "95th percentile" available. The developer claims there's a new back-end "in beta" that allows you to write your own function, but this appears to still only do 1:1 retention (when saving older data, you really want the statistics from a single input calculated into many streams). Also, "in beta" seems a little risky for this installation. If I'm wrong about this assumption, I'd be happy to be shown my error!

    I've heard Zabbix recommended, but it puts data into MySQL or some other SQL database. 100,000 counters on a 5-second interval means 20,000 tps, and while I have an SSD, I don't have an 8-way RAID-6 with battery-backed cache, which I think I'd need for that to work out :-) Again, if that's actually not a problem, I'd be happy to be shown the error of my ways. Also, can Zabbix do the single-data-stream, promote-with-statistics thing?

    Finally, Munin claims to have a new 2.0 coming out "in beta" right now, and it boasts custom retention plans. However, again, it's that "in beta" part -- has anyone used it for real, and at scale? How did it perform, if so?

    I'm almost thinking about using a graphing front-end (such as Graphite) and rolling my own retention backend with a simple layer on top of mmap() and some stats; see the sketch below. That wouldn't be particularly hard, and would probably perform very well, letting the kernel figure out the balance between frequency of flushing to disk and process operations.

    Any other suggestions I should look into? Note: it has to have shown itself able to sustain the kinds of data loads I'm suggesting above; if you can point at the specific implementation you're referencing, so much the better!
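
    The promote-with-statistics step itself is small; a sketch of just the aggregation (the storage layer, mmap() or otherwise, is the real work and is omitted):

    import statistics

    def promote(samples, bucket_size):
        """Downsample raw values into buckets that keep stats, not one average."""
        out = []
        for i in range(0, len(samples), bucket_size):
            bucket = samples[i:i + bucket_size]
            out.append({
                "min": min(bucket),
                "max": max(bucket),
                "avg": statistics.fmean(bucket),
                "stdev": statistics.pstdev(bucket),
                "count": len(bucket),
            })
        return out

    # Promote 5-second samples to 1-minute buckets (12 samples per bucket).
    minute_rows = promote([float(x % 17) for x in range(720)], 12)
    print(minute_rows[0])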

    Read the article
