Search Results

Search found 13341 results on 534 pages for '1 obiee performance tuning'.


  • Performance effects of compressing Program Files on Windows / NTFS

    - by SRobertJames
    What are the performance effects of compressing Program Files on Windows NTFS? On a fast, multicore machine, the overhead of decompression is minimal. Machines are generally disk-bound, and if you can reduce the disk load through compression, you often speed things up. (Microsoft says that the built-in compression of Windows Search indexes actually improves speed for this reason.) On the other hand, Windows' virtual memory is complicated; perhaps compressed files can't be paged out as simply, and there may be other issues. In short: on a fast, multicore machine with a relatively slow disk, what performance effects will compressing Program Files have?

    Read the article

  • Verify server performance

    - by George Kesler
    I'm looking for a quick and SIMPLE way to verify that new servers are performing as expected. The most important metric is disk performance; second is network performance. I'm trying to prevent problems caused by misconfiguration of RAID arrays, NIC teaming, etc. The solution should work with both physical and virtual servers. I don't need sophisticated analysis with different workloads, just one set of benchmarks that I would run against a reference server and later compare against new ones. One problem is that most benchmarks don't give accurate results when run on a VM.
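
    A minimal sketch of the kind of one-shot baseline this describes, assuming a Linux server with GNU coreutils and iperf available (the file path, sizes and the reference-server hostname are placeholders; Windows hosts and VMs would need different tools):

      # Disk: sequential write, forcing data to disk so the page cache doesn't flatter the result
      dd if=/dev/zero of=/tmp/benchfile bs=1M count=4096 conv=fdatasync
      # Disk: sequential read, after dropping caches so we measure the array rather than RAM
      sync && echo 3 > /proc/sys/vm/drop_caches
      dd if=/tmp/benchfile of=/dev/null bs=1M
      rm /tmp/benchfile
      # Network: run "iperf -s" on the reference server, then from the new server:
      iperf -c reference-server -t 30

    Running the same commands on the reference box and comparing the MB/s and Mbit/s figures is usually enough to catch a mis-built RAID array or a broken NIC team.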

    Read the article

  • General video performance degraded on Mac OS X 10.5 (PowerBook G4)

    - by r0ca
    Hi all, I'm quite new to Mac and I just got a PowerBook G4 for free. I installed OS X 10.5 on it and for the first two weeks everything was going fairly smoothly, even though this machine is roughly comparable to a P3. I'm not expecting awesome video performance, but I'd at least like to be able to watch some videos on YouTube. Yesterday night I installed Office 2008 for Mac, and this morning, even after a reboot, my computer is much slower than it used to be. I watched a YouTube video and the framerate was 1:1. I also noticed it on Flash ads; they're way slower! Is there anything I can do to increase video performance, see the list of running processes and which ones are taking the most GPU or CPU, what's taking the most RAM, and so on? What would you Mac pros do with an old laptop running OS X 10.5? Thanks!

    Read the article

  • Skyrim: Heavy Performance Issues after a couple of location changes

    - by Derija
    Okay, I've tried different solutions: ENB Series, removing certain mods, checking my FPS rate, monitoring my resources, .ini tweaks. It's all just fine; I don't see what I'm missing. A couple of days ago, I bought Skyrim. Before I bought the game, I admit I had a pirated copy, because my girlfriend actually wanted to buy me the game as a present, then said she didn't have enough money. Sick of waiting, I decided to buy the game myself. The ridiculous part is, it worked better cracked than it does now uncracked. As the title suggests, after entering and leaving houses a couple of times, my performance drops dramatically. My build is just fine: an Intel i5 quad-core processor, an NVIDIA GTX 560 Ti from Gigabyte (factory overclocked, but manually downclocked to stock settings using the appropriate Gigabyte software, which fixed the CTD issues I had before with both Skyrim and BF3), and 4GB RAM. A website about game tweaks suggested that my HDD may be too slow. A screenshot of a Windows Performance Index sample captioned "This is likely to cause issues" showed an HDD with a performance index of 5.9, exactly the same as mine, so I was toying with the idea of purchasing an SSD, loading the games that really need it (like Skyrim) onto it, and hoping it would do the trick. Unfortunately, SSDs are expensive compared to "normal" HDDs... I'm really getting desperate about it. My save is gone because the patches made it impossible to load saves from the unpatched version, and I've already saved more than 80 times despite being only level 8, just because every time I interact with a door leading to another location I'm scared the performance will drop again. I can't even play for 30 minutes straight anymore; it's just no fun at all. I researched for a couple of days before deciding to post my question here. Any help is appreciated; I don't want to regret having bought the game, since it actually is the best game I've possibly ever played. Sincerely. P.S.: I don't think it's necessary to say, but of course I'm playing on PC. P.P.S.: After monitoring my PC's resources, including CPU, HDD and GPU usage, I don't see any changes even after the said event. P.P.P.S.: The original question was posted here, where I was advised to ask here instead.

    Read the article

  • RAID10 without write-back cache = horrible write performance?

    - by Harry Mexican
    I have just provisioned a dedicated server on SingleHop. I'm running it through some tests to know what to expect performance-wise. On the I/O side (with 4x 1TB disks in RAID 10) I get: write cache disabled - 200 MB/s read throughput, 30 MB/s write throughput. I thought that was really low compared to my desktop HDD, which gets 150/150 or so. So I had a chat with them and they suggested enabling the write cache. New results: write cache enabled - 280 MB/s read, 260 MB/s write. That's great and all, but it means I'd have to add a BBU for an additional monthly cost. Is it normal for the write throughput to be 1/4 that of a regular drive on RAID 10 if you don't have write cache? It almost feels like it's intentionally bad to force you to pony up for the BBU. I'd be happy with normal non-RAID performance of 150/150.
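
    One hedged way to see the effect of the controller's write cache from the OS side, assuming a Linux host with GNU dd (the file name and sizes are placeholders):

      # Buffered write - the page cache and any controller cache can absorb it
      dd if=/dev/zero of=testfile bs=1M count=1024
      # Write that waits for the data to actually reach the disks before reporting a speed
      dd if=/dev/zero of=testfile bs=1M count=1024 conv=fdatasync
      # Write that bypasses the page cache entirely
      dd if=/dev/zero of=testfile bs=1M count=1024 oflag=direct
      rm testfile

    A large gap between the buffered and fdatasync/direct figures is consistent with write coalescing and the mirroring overhead normally being hidden by the controller's write-back cache; with the cache disabled, every request pays that cost synchronously.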

    Read the article

  • Analyzing Linux NFS server performance

    - by Kamil Kisiel
    I'd like to do some analysis of our NFS server to help track down potential bottlenecks in our applications. The server is running SUSE Enterprise Linux 10. The kind of things I'm looking to know are:
    - Which files are being accessed by which clients
    - Read/write throughput on a per-client basis
    - Overhead imposed by other RPC calls
    - Time spent waiting on other NFS requests, or disk I/O, to service a client
    I already know about the statistics available in /proc/net/rpc/nfsd and in fact I wrote a blog post describing them in depth. What I'm looking for is a way to dig deeper and help understand what factors are contributing to the performance seen by a particular client. I want to analyze the role the NFS server plays in the performance of an application on our cluster so that I can think of ways to best optimize it.
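
    Beyond the /proc/net/rpc/nfsd counters the question already mentions, a few shell starting points, assuming nfsstat, iostat (from sysstat) and tcpdump are installed and that eth0 is the serving interface (a placeholder):

      # Aggregate server-side RPC/NFS operation counts
      nfsstat -s
      # Raw counters, including the io (bytes read/written) and th (thread usage) lines
      cat /proc/net/rpc/nfsd
      # Disk wait and utilization while the NFS load is running
      iostat -x 5 12
      # Capture NFS traffic for per-client, per-file analysis (e.g. with Wireshark's NFS dissector)
      tcpdump -i eth0 -w nfs.pcap port 2049

    The packet capture is the only one of these that gets down to per-client and per-file detail; the others mainly show whether the bottleneck is threads, disk, or the network.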

    Read the article

  • Performance of blocking countries using iptables/netfilter

    - by markus
    It's easy to block IPs by country using iptables (e.g. as described at http://www.cyberciti.biz/faq/block-entier-country-using-iptables/). However, I've read that performance can suffer if the deny list gets too large. An alternative is installing the iptables geoip patch, or using ipset (http://www.jsimmons.co.uk/2010/06/08/using-ipset-with-iptables-in-ubuntu-lts-1004-to-block-large-ip-ranges/) instead of plain iptables rules. Does anyone have experience with the various approaches and can say something about the performance differences? Are there other ways to block country IP ranges on Linux that I didn't mention above?
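
    A rough sketch of the ipset route, assuming a zone file of CIDR blocks for the target country has already been downloaded (country.zone below is a placeholder) and a reasonably recent ipset; the exact create/add syntax differs between ipset versions:

      # One hash-based set holds all of the country's networks; iptables then needs a single rule
      ipset create geoblock hash:net
      while read -r net; do ipset add geoblock "$net"; done < country.zone
      iptables -I INPUT -m set --match-set geoblock src -j DROP

    The point of the set is that lookups stay roughly constant-time no matter how many networks it holds, whereas one iptables rule per network is evaluated linearly, which is where the large-deny-list slowdown comes from.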

    Read the article

  • Determining Performance Limits

    - by JeffV
    I have a number of Windows processes that pass messages between them at a high rate using TCP over localhost. Aside from testing on actual hardware, how can I assess what my hardware limit will be? These applications are not doing CPU-intensive work; they're mostly decomposing and combining messages, scanning them for special flags in the data, etc. The message size is typically 3 KB and the rate is typically ~10k messages per second, roughly 30 MB per second between processing stages. There may be 10 or more stages, depending. For this type of application, what should I look at when assessing performance? What do I look for in a server, performance-wise? I am currently running a Xeon L5408 with 32 GB of RAM, but I am assuming cache is more important than actual RAM size, as I am barely touching the RAM.
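
    As a rough ceiling check (not a substitute for profiling the real pipeline), loopback TCP throughput with message-sized writes can be measured with iperf, assuming it is installed; the buffer size, stream count and duration below are placeholders:

      # Terminal 1: server side
      iperf -s
      # Terminal 2: client side - 3 KB writes to approximate the message size, 4 parallel streams
      iperf -c 127.0.0.1 -l 3K -P 4 -t 30

    If the loopback ceiling comes out far above the ~30 MB/s per stage described here, the limit is more likely per-message CPU cost, syscalls and context switches than raw bandwidth, so per-core CPU utilization and cache behaviour matter more than total RAM.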

    Read the article

  • Does shutting down idle VMs improve performance?

    - by Samselvaprabu
    Our team members often come to me complaining that their VMs are slow. They have suggested temporarily shutting down some of the other VMs and then trying the slow VM again, but in most cases that does not help. Assume that I have assigned 4 GB of RAM and 2 vCPUs to my VM; ideally it should not have performance issues. However, our ESXi 4.1 server hosts multiple VMs, and we have overcommitted memory and CPU. Does shutting down other VMs really help improve performance or not? [Note: We are using ESXi 4.1 and our hardware is an R710 server. We have a large number of VMs on a single server, so memory is overcommitted.]
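
    Whether the other VMs are actually hurting can be measured on the host rather than guessed; a hedged sketch using esxtop (the interval and sample count are placeholders, and flags may differ slightly across ESXi versions):

      # Batch mode: sample host and per-VM counters every 5 seconds, 60 samples, into a CSV
      esxtop -b -d 5 -n 60 > esxtop-sample.csv
      # Interactively: press "c" for CPU and watch %RDY/%CSTP per VM; press "m" for memory
      # and watch ballooning and swap activity

    High CPU ready time or active ballooning/swapping would mean the overcommit is biting, and powering off idle VMs (or trimming their vCPUs and reservations) should help; if those counters are low, shutting other VMs down is unlikely to make the slow VM faster.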

    Read the article

  • Strange performance from RAID5 using WD RE4 disks

    - by Howard
    I've noticed a bit of a performance issue with some WD RE4 drives I'm using under AMD's hardware RAID solution. First, a bit of background on the environment:
    - OS: Windows 7 Home Premium x64
    - HDDs: 3x 1TB WD RE4 (RAID Edition 4) in a RAID 5 setup with a 128 KB stripe (2TB usable space)
    - Testing tool: HD Tune, process set to "High Priority"
    - Processor: AMD Phenom II X6 1100T
    - RAM: 16GB DDR3-1600
    - Motherboard: MSI 970A-G45
    The image below pretty much depicts the issue I'm having. Every test shows the same thing: a period of similar length where the performance drops to a few megabytes per second. This can't be a TLER issue, as the whole purpose of the RE4s is to work around that. Any help would be greatly appreciated.

    Read the article

  • Exchange 2010 SP2 OWA performance

    - by Frederik Nielsen
    How do I increase performance in OWA 2010 SP2? I am running the CAS role on a separate installation, which has 8GB RAM and 4 CPU cores, running virtualized in a VMware environment. However, the load times are pretty bad, so is there any way to improve them? I am thinking of installing a Linux caching server in front of OWA, but will that work? And how should it be done? Update: all right, I "fixed" it - it was just a temporary issue. Thanks for your replies.

    Read the article

  • Oracle Business Intelligence Advanced - Hands-on Workshop for Partners - January 18-21

    - by Claudia Costa
    Workshop Description
    This FREE hands-on workshop highlights the strengths of OBIEE 11g by giving attendees hands-on experience with the BI 11g product. OBIEE 11g has adopted the standardized infrastructure of Fusion Middleware to provide robust server capability along with highly anticipated advanced visualization components such as maps, Flash-based charts, scorecards and KPIs. This workshop focuses on new features and infrastructure components for BI practitioners who are familiar with either OBIEE 10g or previous BI releases. After taking this course, Oracle Business Intelligence 11g Advanced, you will gain insight into OBIEE 11g technology, reporting solutions and new features. The workshop provides opportunities to practice in an OBIEE 11g environment through hands-on activities. Participants will gain an in-depth understanding of the new OBIEE 11g architecture, the security model, installation/configuration, as well as reporting aspects such as the new ROLAP/MOLAP-style hierarchical browsing, new chart types, the Action Framework and Advanced Visualization. If you are a Business Intelligence practitioner familiar with BI 10g, you cannot afford to miss this 3-day workshop. Register Now!
    Presentations: Business Intelligence EE (OBIEE) 11g: Advanced Workshop
    · OBIEE 11g Overview
    · OBIEE 11g Architecture and Infrastructure
    · OBIEE 11g Installation, Configuration and Monitoring
    · OBIEE 11g Security Model and BI Components
    · OBIEE 11g Homepage Overview
    · New Visualizations: Master-Detail Events, Charts, Hierarchies
    · Report Building with OBIEE 11g and Catalog Management
    · Spatial Integration, Action Framework, Scorecards
    · OBIEE 11g Dashboards
    · OBIEE Integration Options
    Lab Outline: Oracle Business Intelligence (OBIEE) 11g: Advanced Workshop
    The labs exercise OBIEE core functionality through hands-on activities and are based on an Oracle VirtualBox image with the software and training samples pre-installed. This advanced course leaves a few labs optional during the workshop so that students can practice them on their own. The primary purpose of the workshop is to provide expertise in the 11g features and the infrastructure changes from 10g. The labs will allow you to explore:
    · A clear understanding of the OBIEE 11g architecture
    · A clear understanding of the OBIEE differentiators
    · OBIEE 11g Security Model
    · OBIEE 11g Environment Management
    · Report Building with OBIEE 11g
    · OBIEE 11g Dashboard and Homepage Environment
    · New Visualization features
    · Management of Reports, Dashboards and BI Catalog Objects
    Audience
    · Business Intelligence Evangelists
    · Business Intelligence Application Developers or Consultants
    · Data Warehouse Developers
    · Enterprise Architects
    · Industry Solutions Architects
    Prerequisites
    · Experience with and understanding of OBIEE 10g is required
    · Good understanding of data modeling for reporting purposes
    · Strong experience with database technologies preferred
    Equipment Requirements
    This workshop requires attendees to provide their own laptops, which must meet the following minimum hardware/software requirements. The OBIEE 11g environment requires at least 3 GB of RAM (4 GB preferred), without which students will not be able to complete the labs. The workshop environment includes a VM image plus software components that students will install on their laptops for the labs.
    · Minimum 3GB RAM, 25GB free disk space
    · Internet Explorer 7
    · VirtualBox (the latest version), downloadable from http://www.virtualbox.org
    · WinRAR or 7-Zip, downloadable from http://www.win-rar.com/download.html or http://www.7zip.com/
    Attendees will be given a VirtualBox image for the Oracle BI 11g Workshop containing the software along with the required toolset, database and data sets for the labs.
    Agenda
    This class runs for 3 days.
    9:00am: Sign-in and technical setup
    9:30am: Workshop starts
    5:00pm: Workshop ends
    Location
    Hotel Holiday Inn Express - Porto Salvo - Lisbon
    This class is free. Register early to confirm a seat!
    Oracle BI Advanced 11g Hands-on Workshop - Schedule. Register Now!
    · January 11-13, 2011: Kista, Sweden
    · January 18-20, 2011: Lisbon, Portugal
    · March 1-3, 2011: Reading, Berkshire, UK
    · March 15-17, 2011: Colombes, Paris, France
    · March 29-31, 2011: Amsterdam, Netherlands
    Questions? For registration questions please send an email to [email protected]. For other information, please contact Claudia Costa, phone: 214235027, or by email.

    Read the article

  • Why does Chrome video performance substantially degrade after waking from suspend in 10.10?

    - by Grant Heaslip
    Note: For some more details, some of which may not be true given what I've figured out, see this post. When I first boot my computer, video performance in Chrome (both native H.264 HTML5 on YouTube and Vimeo, and Flash) is perfectly reasonable. CPU usage stays low, everything works correctly, and the video is silky-smooth. But for whatever reason, if I suspend my computer and then wake it up, video performance plummets. Full-screen HTML5 video is choppy at best, and full-screen Flash video basically brings my computer to its knees (I'm talking less than a frame a second, and a 5-second lead time to leave full-screen after hitting Esc). Restarting Chrome doesn't fix this - I need to completely restart my machine before performance goes back to normal. Video performance in other applications, such as Movie Player, doesn't seem to be affected at all by the suspend cycle - it's only Chrome. I'm using a Lenovo X201 with an Intel GMA HD graphics chipset and Intel components all around (I don't need any proprietary drivers). This didn't happen in 10.04, and I haven't done anything that I think would have caused it. It's possible that a Chrome release could have caused this, but it seems less likely than a regression between 10.04 and 10.10. Any ideas? EDIT: In response to Georg's comment, logging in and out doesn't fix it. Restarting Compiz or switching to Metacity (at least by using "compiz/metacity --replace & disown" - am I doing it right?) doesn't help (actually, it seemed to help somewhat with Flash once, but I haven't been able to reproduce this). I'm not sure about GDM - when I use "sudo restart gdm" I get kicked back to the Linux shell (?), which I have no idea how to get out of. Also, I want to make very clear that this isn't just a case of Flash sucking (it does, but that's beside the point). I'm seeing the same general problem with HTML5 videos, and Flash is performing better on my Nexus One than it does on my Core i5 laptop. There's something screwy going on with Chrome and/or 10.10.

    Read the article

  • Performance Tuning with Traces

    - by Tara Kizer
    This past Saturday, I presented "Performance Tuning with Traces" at SQL Saturday #47 in Phoenix, Arizona.  You can download my slide deck and supporting files here. This is the same presentation that I did in September at SQL Saturday #55 in San Diego, however I focused less on my custom server-side trace tool and more on the steps that I take to troubleshoot a production performance problem which often includes server-side tracing.  If any of my blog readers attended the presentation, I'd love to hear your feedback.  I'm specifically interested in hearing constructive criticism.  Speaking in front of people is not something that comes naturally to me.  I plan on presenting in the future, so feedback on how I can do a better job would be very helpful.  My number one problem is I talk too fast!

    Read the article

  • JavaOne 2012 LAD Session: The Future of JVM Performance Tuning

    - by Ricardo Ferreira
    Hi folks. This year, alongside Oracle OpenWorld Latin America, another edition of JavaOne Latin America took place - the most important Java event for the developer community. I would like to share with you the slides that I used in my session. The session was "The Future of JVM Performance Tuning", and the idea was to share some knowledge about the JVM enhancements Oracle has implemented in HotSpot for performance, especially those related to GC ("Garbage Collection") and SDP ("Sockets Direct Protocol"). I hope you enjoy the content :)

    Read the article

  • Performance-wise .htaccess

    - by purpler
    Here's my .htaccess template; I wonder if anything could be added to increase website performance.
    # Defaults
    AddDefaultCharset UTF-8
    DefaultLanguage en-US
    ServerSignature Off
    FileETag None
    Header unset ETag
    Options -MultiViews
    #Options All -Indexes
    # Force the latest IE version or ChromeFrame
    <IfModule mod_setenvif.c>
      <IfModule mod_headers.c>
        BrowserMatch MSIE ie
        Header set X-UA-Compatible "IE=Edge,chrome=1" env=ie
      </IfModule>
    </IfModule>
    # Proxy X-UA Setup
    <IfModule mod_headers.c>
      Header append Vary User-Agent
    </IfModule>
    # Rewrites
    Options +FollowSymlinks
    RewriteEngine On
    RewriteBase /
    # Redirect to non-WWW
    RewriteCond %{HTTPS} !=on
    RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
    RewriteRule ^(.*)$ http://%1/$1 [R=301,L]
    # Redirect to WWW
    RewriteCond %{HTTP_HOST} ^domain.com
    RewriteRule (.*) http://www.domain.com/$1 [R=301,L]
    # Redirect index to root
    RewriteRule ^(.*)index\.(php|html)$ /$1 [R=301,L]
    # Caching
    ExpiresActive On
    ExpiresDefault A0
    Header set Cache-Control "public"
    # 1 Year Long Cache
    <FilesMatch "\.(flv|fla|ico|pdf|avi|mov|ppt|doc|mp3|wmv|wav|png|jpg|jpeg|gif|swf|js|css|ttf|eot|woff|svg|svgz)$">
      ExpiresDefault A31622400
    </FilesMatch>
    # Proxy Caching
    <FilesMatch "\.(css|js|png)$">
      ExpiresDefault A31622400
      Header set Cache-Control "private"
    </FilesMatch>
    # Protect against DOS attacks by limiting file upload size
    LimitRequestBody 10240000
    # Proper SVG serving
    AddType image/svg+xml svg svgz
    AddEncoding gzip svgz
    # GZip Compression
    <IfModule mod_deflate.c>
      <FilesMatch "\.(php|html|css|js|xml|txt|ttf|otf|eot|svg)$">
        SetOutputFilter DEFLATE
      </FilesMatch>
    </IfModule>
    # Error page
    ErrorDocument 404 /404.html
    # Deny access to sensitive files
    <FilesMatch "\.(htaccess|ini|log|psd)$">
      Order Allow,Deny
      Deny from all
    </FilesMatch>
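
    A quick hedged check that the caching and compression directives above are actually taking effect (the URL path is a placeholder, and mod_deflate may not compress a HEAD request, hence the discarded GET):

      apachectl configtest
      curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip' http://www.domain.com/css/style.css \
        | grep -iE 'cache-control|expires|content-encoding|vary'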

    Read the article

  • WebLogic Server Performance and Tuning: Part II - Thread Management

    - by Gokhan Gungor
    WebLogic Server, like any other Java application server, provides resources that your applications use to provide services. Unfortunately, none of these resources are unlimited, and they must be managed carefully. One of these resources is threads, which are pooled to provide better throughput and performance along with fast response times, and to avoid deadlocks. Threads are the execution points through which WebLogic Server delivers its power and executes work. Managing threads is very important because it may affect the overall performance of the entire system. In releases prior to WebLogic Server 9.0 we had multiple execute queues and user-defined thread pools. There were different queues for different types of work, each with a fixed number of execute threads. Tuning these thread pools and finding the proper number of threads was time-consuming and required many trials. WebLogic Server 9.0 and later releases use a single thread pool and a single priority-based execute queue. All types of work are executed in this single thread pool. Its size (thread count) is automatically decreased or increased (self-tuned). The new "self-tuning" system simplifies getting the proper number of threads and utilizing them.
    A work manager allows your applications to run concurrently in multiple threads. A work manager is a mechanism that lets you manage and utilize threads and create rules/guidelines to follow when assigning requests to threads. We can set a scheduling guideline or prioritize a request with a work manager and then associate this work manager with one or more applications. At run time, WebLogic Server uses these guidelines to assign pending work/requests to execution threads. The position of a request in the execute queue is determined by its priority. A default work manager is provided, and it should be sufficient for most applications. However, there can be cases where you want to change this default configuration; your application(s) may provide services that need a mixture of fast response times and long-running processes like batch updates. Be aware, though, that a wrong work manager configuration can lead to a performance penalty when you were expecting an improvement.
    We can define/configure work managers at:
    •    Domain level: config.xml
    •    Application level: weblogic-application.xml
    •    Component level: weblogic-ejb-jar.xml or weblogic.xml (for a specific web application, use weblogic.xml)
    We can use the following predefined request classes and constraints to manage the work:
    •    Fair Share Request Class: specifies the average thread-use time required to process requests. The default is 50.
    •    Response Time Request Class: specifies a response time goal in milliseconds.
    •    Context Request Class: assigns request classes to requests based on context information.
    •    Max Threads Constraint: limits the number of concurrent threads executing requests.
    •    Min Threads Constraint: guarantees the number of threads the server will allocate to requests.
    •    Capacity Constraint: causes the server to reject requests only when it has reached its capacity.
    Let's create a work manager for our application for long-running work. Go to the WebLogic console and select Environment | Work Managers from the domain structure tree. Click the New button, select Work Manager and click Next. Enter the name for the work manager and click Next. Then select the managed server instance(s) or clusters from the available targets (the ones your long-running application is deployed to) and click Finish.
    Click on MyWorkManager, open the Configuration tab, check Ignore Stuck Threads and save. This will prevent WebLogic from treating long-running processes (those taking more than a specified time) as stuck, and allow them to finish.

    Read the article

  • Improving terminal server performance for a specific app

    - by Matt
    We have a Windows 2003 terminal server running 2X application load balancing that is hosting a client's application, accessed by around 50 users. Each user has their own database. The database is file-based; the application is developed in Delphi, so I think the database may be BDE-based. As you can imagine, there is probably quite a lot of disk I/O. Here are some of the perfmon readings:
    - Logged-in users (average): 20-25
    - CPU utilization (average): 80-100%
    - Disk queue length (average): 1.6
    - % Disk time (average): 111
    - Page faults/sec (average): 1400
    The application takes on average about a minute to load. As usual, the budget is tight. Are there basic Windows performance tuning tips that people can recommend to improve things before we fork out on more RAM, etc.? The server is a 2.8GHz Xeon with 3GB of RAM.

    Read the article

  • Getting Optimal Performance from Oracle E-Business Suite

    - by Steven Chan (Oracle Development)
    Performance tuning and optimization in E-Business Suite environments can involve many different components and diagnostic tools. Samer Barakat, Senior Architect in our Applications Performance group, held an OpenWorld 2013 session that covered:
    - Performance triage, analysis and diagnostic tools
    - Optimizing the E-Business Suite application tier, including Concurrent Manager
    - Optimizing the E-Business Suite database tier
    - Optimizing the E-Business Suite on Real Application Clusters (RAC)
    - E-Business Suite on engineered systems, including Exadata and Exalogic
    - Optimizing E-Business Suite data management, including archiving and purging
    The Applications Performance group works with the world's largest E-Business Suite customers to isolate and resolve performance bottlenecks. This team has helped tune the E-Business Suite environments of the world's largest companies to handle staggering amounts of transactional volume in multi-terabyte databases. The group also publishes our official Oracle Apps benchmarks, white papers, and performance metrics. This is an essential set of tips and techniques that all EBS sysadmins and DBAs can use to improve the performance of their environments: Getting Optimal Performance from Oracle E-Business Suite (PDF, 1.7 MB). OpenWorld 2013 presentations are only available for approximately six months -- until roughly March 2014 -- so download this one while it's still available. Related Articles: E-Business Suite Technology Sessions at OpenWorld 2013; OAUG/Collaborate Recap: Best Practices for E-Business Suite Performance Tuning.

    Read the article

  • Bad disk performance on HP DL360 with Smart Array P400i RAID controller

    - by sarge
    I have an HP DL360 server with 4x 146GB SAS disks and a Smart Array P400i RAID controller with 256MB cache. The disks are in RAID 5 (3 disks + 1 hot spare). The server is running VMware ESX 3i. The disk write performance is really bad. Here are some numbers:
    ns1:~# hdparm -tT /dev/sda
    /dev/sda:
     Timing cached reads: 3364 MB in 2.00 seconds = 1685.69 MB/sec
     Timing buffered disk reads: 18 MB in 3.79 seconds = 4.75 MB/sec
    ns1:~# time sh -c "dd if=/dev/zero of=ddfile bs=8k count=125000 && sync"
    125000+0 records in
    125000+0 records out
    1024000000 bytes (1.0 GB) copied, 282.307 s, 3.6 MB/s
    real 4m52.003s
    user 0m2.160s
    sys 3m10.796s
    Compared to another server, those numbers are terrible. A Dell R200 with 2x 500GB SATA disks and a PERC RAID controller (disks mirrored) gives:
    web4:~# hdparm -tT /dev/sda
    /dev/sda:
     Timing cached reads: 6584 MB in 2.00 seconds = 3297.79 MB/sec
     Timing buffered disk reads: 316 MB in 3.02 seconds = 104.79 MB/sec
    web4:~# time sh -c "dd if=/dev/zero of=ddfile bs=8k count=125000 && sync"
    125000+0 records in
    125000+0 records out
    1024000000 bytes (1.0 GB) copied, 35.2919 s, 29.0 MB/s
    real 0m36.570s
    user 0m0.476s
    sys 0m32.298s
    The server isn't very loaded, and the VMware Infrastructure Client performance monitor shows 550 KBps average read and 1208 KBps average write for the last 30 minutes (highest write rate: 6.6 MBps). This has been a problem from the start. Any ideas?
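
    One hedged first check, assuming HP's hpacucli utility can be run against this controller (exact syntax varies a little between versions, and on ESX 3i it may have to be run from an offline maintenance environment instead):

      # Show controller, cache and logical drive status - look for the write cache being
      # disabled because no battery-backed cache (BBWC) is present, and for any rebuild in progress
      hpacucli ctrl all show status
      hpacucli ctrl all show config detail

    A P400i whose write cache is off (typically because there is no battery) makes every small RAID 5 write pay the full read-modify-write penalty synchronously, which would line up with single-digit MB/s results like the ones above.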

    Read the article

  • Performance variation

    - by Ree
    During my time working with multiple machines, I have noticed that the performance of the same machine doing the same tasks in the same order differs, and sometimes the difference is big enough to be noticeable. This applies to all the machines I've owned and/or maintained (old and modern). Some examples (many of which you may have noticed yourself) that are sometimes completed in different time frames:
    - POST
    - OS installation
    - Hardware tests and operations (usually executed within a customized OS such as one of the many DOS variants), HDD tests and "low level" formats
    - Software installation or other tasks (such as benchmarks) within a general-purpose OS (Windows, Linux, etc.)
    I can imagine this is caused by the fact that a machine is built from many components that have to communicate as a whole, and since the mechanical and electronic parts aren't perfect, overhead occurs. In the last example, I assume the OS's complexity and the multiple concurrently running processes have some additional effect as well. However, I'm wondering whether this hardware imperfection and overhead is really large enough to be humanly noticeable. Maybe there are other factors that are just as influential, or even more so? So, in short - why? To emphasize: the difference is noticeable on the same machine performing the same tasks, and this applies to ANY machine in my experience. I'm not comparing machine-to-machine performance.

    Read the article

  • Improving Performance of RDP Over LAN

    - by Jared Brown
    Architecture: a deployment of 6 new HP thin clients (Windows XP Embedded) with TCP/IP access to several new HP servers (Windows 2003 Server). Each thin client is connected over fiber optic to a Gigabit Cisco switch, which the servers are also connected to. There are 10/100 Ethernet-to-fiber converter boxes on each end of the fiber cables.
    Problem: noticeable lag over RDP while using the Unigraphics CAD package; 3D models take 0.5 to 1 second to respond to mouse actions.
    Other details: network throughput on each thin client's RDP session is 7288 kbps. RDP connection settings - color setting: 15k; all themes, etc. turned off. Local and remote system performance stats are well within norms (CPU, memory, and network).
    Questions: Are there newer versions of Terminal Services or RDP I can use on my existing OSes? Are there compression algorithms, etc. that are well suited to a high-bandwidth LAN? Are there valid alternatives that would yield higher performance (i.e. UltraVNC with drivers installed)? Are there TCP/IP tuning options I can exploit?

    Read the article
