Search Results

Search found 13608 results on 545 pages for 'performance dashboard'.


  • Computer Networks UNISA - Chap 15 – Network Management

    - by MarkPearl
    After reading this section you should be able to: understand network management and the importance of documentation, baseline measurements, policies, and regulations in assessing and maintaining a network's health; manage a network's performance using SNMP-based network management software, system and event logs, and traffic-shaping techniques; identify the reasons for and elements of an asset management system; and plan and follow regular hardware and software maintenance routines.

    Fundamentals of Network Management. Network management refers to the assessment, monitoring, and maintenance of all aspects of a network: checking for hardware faults, ensuring high QoS, maintaining records of network assets, and so on. The scope of network management differs depending on the size and requirements of the network, but all of its sub-topics share the goals of enhancing efficiency and performance while preventing costly downtime or loss.

    Documentation. The way documentation is stored may vary, but to adequately manage a network one should record at least the following: physical topology (types of LAN and WAN topologies: ring, star, hybrid); access method (Ethernet 802.3, token ring, etc.); protocols; devices (switches, routers, etc.); operating systems; applications; and configurations (operating system versions and config files for server/client software).

    Baseline Measurements. A baseline is a report of the network's current state of operation. Baseline measurements might include the utilization rate of your network backbone, the number of users logged on per day, and so on. Baselines allow you to compare future performance increases or decreases caused by network changes or events with past network performance; obtaining them is the only way to know for certain whether a pattern of usage has changed, or whether a network upgrade has made a difference. Various tools are available for measuring baseline performance on a network.

    Policies, Procedures, and Regulations. Following rules helps limit chaos, confusion, and possibly downtime. The following policies, procedures, and regulations make for sound network management: media installation and management (including designing the physical layout of cable); network addressing policies (including choosing and applying an addressing scheme); resource sharing and naming conventions (including rules for logon IDs); security-related policies; troubleshooting procedures; and backup and disaster recovery procedures. In addition to internal policies, a network manager must consider external regulatory rules.

    Fault and Performance Management. After documenting every aspect of your network and following policies and best practices, you are ready to assess your network's status on an ongoing basis. This process includes both performance management and fault management.

    Network Management Software. To accomplish both fault and performance management, organizations often use enterprise-wide network management software. Various software packages do this; each collects data from multiple networked devices at regular intervals, in a process called polling. Each managed device runs a network management agent; so as not to affect the performance of a device while collecting information, agents do not demand significant processing resources. The definitions of managed devices and the data they provide are collected in a MIB (Management Information Base).

    Agents communicate information about managed devices via any of several Application layer protocols. On modern networks most agents use SNMP, which is part of the TCP/IP suite and typically runs over UDP on port 161. Because of their flexibility and sophistication, network management applications are a challenge to configure and fine-tune: one needs to collect only relevant information and avoid causing performance issues (polling a device every 5 seconds can be a problem with thousands of devices). MRTG (Multi Router Traffic Grapher) is a simple command-line utility that uses SNMP to poll devices and collect data in a log file; it can be used with Windows, UNIX, and Linux.

    System and Event Logs. Virtually every condition recognized by an operating system can be recorded, typically via event logs. Windows provides a GUI event log viewer; UNIX and Linux record similar information in a system log. Much of the information collected in event logs and syslog files does not point to a problem, even if it is marked with a warning, so it is important to filter your logs appropriately to reduce the noise.

    Traffic Shaping. When a network must handle high volumes of traffic, users benefit from a performance management technique called traffic shaping. Traffic shaping involves manipulating certain characteristics of packets, data streams, or connections to manage the type and amount of traffic traversing a network or interface at any moment. Its goals are to assure timely delivery of the most important traffic while offering the best possible performance for all users. Several types of traffic prioritization exist, including prioritizing traffic by protocol, IP address, user group, DiffServ, VLAN tag in a Data Link layer frame, or service or application.

    Caching. In addition to traffic shaping, a network or host might use caching to improve performance. Caching is the local storage of frequently needed files that would otherwise be obtained from an external source; by keeping files close to the requester, it allows the user to access them quickly. The most common type is Web caching, in which Web pages are stored locally. To an ISP, caching is much more than a convenience: it prevents a significant volume of WAN traffic, thus improving performance and saving money.

    Asset Management. Another key component in managing networks is identifying and tracking hardware, known as asset management. The first step is to take an inventory of each node on the network; you will also want to keep records of every piece of software purchased by your organization. Asset management simplifies maintaining and upgrading the network, chiefly because you know what the system includes, and it provides network administrators with information about the costs and benefits of certain types of hardware or software.

    Change Management. Networks are always in a state of flux: software changes and patches, client upgrades, shared application upgrades, NOS upgrades, hardware and physical plant changes, cabling upgrades, and backbone upgrades. For a detailed explanation of each of these, read the textbook (pages 750-761).
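
    As a minimal illustration of SNMP-style polling (a sketch in Python, not material from the textbook; it assumes the net-snmp snmpget command-line tool is installed, and the host, community string, and OID are hypothetical):

      import subprocess
      import time

      HOST = "192.0.2.10"            # hypothetical managed device
      COMMUNITY = "public"           # hypothetical read-only community string
      OID = "IF-MIB::ifInOctets.1"   # interface input byte counter

      def poll_once():
          # snmpget ships with net-snmp; -v2c selects SNMPv2c, -c the community
          out = subprocess.check_output(
              ["snmpget", "-v2c", "-c", COMMUNITY, HOST, OID])
          return out.strip()

      # Poll once a minute; polling too aggressively (e.g. every 5 seconds
      # across thousands of devices) is itself a performance problem.
      while True:
          print(poll_once())
          time.sleep(60)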

    Read the article

  • Optimizing a MySQL query to avoid "Using filesort"

    - by usef_ksa
    I need your help to optimize this query to avoid "Using filesort". The job of the query is to select all the articles that belong to a specific tag:

      select title from tag, article
      where tag = 'Riyad' AND tag.article_id = article.id
      order by tag.article_id;

    The table structures are the following. Tag table:

      CREATE TABLE `tag` (
        `tag` VARCHAR( 30 ) NOT NULL ,
        `article_id` INT NOT NULL ,
        INDEX ( `tag` )
      ) ENGINE = MYISAM ;

    Article table:

      CREATE TABLE `article` (
        `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY ,
        `title` VARCHAR( 60 ) NOT NULL
      ) ENGINE = MYISAM ;

    Sample data:

      INSERT INTO `article` VALUES (1, 'About Riyad');
      INSERT INTO `article` VALUES (2, 'About Newyork');
      INSERT INTO `article` VALUES (3, 'About Paris');
      INSERT INTO `article` VALUES (4, 'About London');
      INSERT INTO `tag` VALUES ('Riyad', 1);
      INSERT INTO `tag` VALUES ('Saudia', 1);
      INSERT INTO `tag` VALUES ('Newyork', 2);
      INSERT INTO `tag` VALUES ('USA', 2);
      INSERT INTO `tag` VALUES ('Paris', 3);
      INSERT INTO `tag` VALUES ('France', 3);
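
    One approach worth trying (a sketch, not a verified fix for this exact schema): replace the single-column index on tag with a composite index covering both the filter column and the sort column, so MySQL can read matching rows already in article_id order and skip the filesort. Verify with EXPLAIN; the index name MySQL auto-assigned to INDEX (`tag`) is assumed to be "tag" here.

      ALTER TABLE tag DROP INDEX tag;
      ALTER TABLE tag ADD INDEX tag_article (tag, article_id);

      EXPLAIN SELECT title
      FROM tag JOIN article ON tag.article_id = article.id
      WHERE tag = 'Riyad'
      ORDER BY tag.article_id;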

    Read the article

  • Simple Python Challenge: Fastest Bitwise XOR on Data Buffers

    - by user213060
    Challenge: perform a bitwise XOR on two equal-sized buffers. The buffers are required to be the Python str type, since this is traditionally the type for data buffers in Python. Return the resultant value as a str. Do this as fast as possible. The inputs are two 1 megabyte (2**20 byte) strings. The challenge is to substantially beat my inefficient algorithm using Python or existing third-party Python modules (relaxed rules: or create your own module). Marginal increases are useless.

      from os import urandom
      from numpy import frombuffer, bitwise_xor, byte

      def slow_xor(aa, bb):
          a = frombuffer(aa, dtype=byte)
          b = frombuffer(bb, dtype=byte)
          c = bitwise_xor(a, b)
          r = c.tostring()
          return r

      aa = urandom(2**20)
      bb = urandom(2**20)

      def test_it():
          for x in xrange(1000):
              slow_xor(aa, bb)
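
    One possible speedup (a sketch, assuming the buffer length is a multiple of 8, as 2**20 is): view the same bytes as 64-bit words so numpy XORs 8 bytes per element, and skip the intermediate names. Whether it is a substantial win rather than a marginal one would need measuring.

      from numpy import frombuffer, uint64

      def faster_xor(aa, bb):
          # Reinterpret the str buffers as arrays of 64-bit unsigned ints;
          # frombuffer itself copies no data.
          a = frombuffer(aa, dtype=uint64)
          b = frombuffer(bb, dtype=uint64)
          return (a ^ b).tostring()  # one copy back out to str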

    Read the article

  • Advantage of using a static member function instead of an equivalent non-static member function?

    - by jonathanasdf
    I was wondering whether there's any advantage to using a static member function when there is a non-static equivalent. Will it result in faster execution (because of not having to care about all of the member variables), or maybe less use of memory (because of not being included in all instances)? Basically, the function I'm looking at is a utility function that rotates an integer array representing pixel colours an arbitrary number of degrees around an arbitrary centre point. It is placed in my abstract Bullet base class, since only the bullets will be using it and I didn't want the overhead of calling it in some utility class. It's a bit too long, and it is used in every single derived bullet class, making it probably not a good idea to inline. How would you suggest I define this function? As a static member function of Bullet, as a non-static member function of Bullet, or maybe not as a member of Bullet at all but defined outside of the class in Bullet.h? What are the advantages and disadvantages of each?
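
    For reference, a minimal sketch of the three options (names and signatures hypothetical). Note that member functions are never stored per instance, so none of these changes per-object memory; a static member or free function simply compiles to a call with no hidden this pointer.

      // Bullet.h
      class Bullet {
      public:
          // (1) static member: no 'this', called as Bullet::rotate(...)
          static void rotate(int* pixels, int w, int h,
                             double degrees, int cx, int cy);

          // (2) non-static member: receives a 'this' pointer it never uses
          void rotateMember(int* pixels, int w, int h,
                            double degrees, int cx, int cy);
      };

      // (3) free function at namespace scope, declared alongside the class
      void rotatePixels(int* pixels, int w, int h,
                        double degrees, int cx, int cy);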

    Read the article

  • How to recognize which indexes are not used?

    - by tomaszs
    I have a table in MySQL with 7 indexes, most of them on more than one column. I think that is too many indexes. Is there any way to get statistics on which indexes are used more by the thousands of queries against this database, and which are less worthwhile, so I know which indexes to consider removing first?
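
    If you are on a recent MySQL (5.7 or later, with performance_schema enabled), the bundled sys schema exposes exactly this; a sketch, assuming default settings and a hypothetical schema name:

      -- Indexes that have not been used since the server started
      SELECT * FROM sys.schema_unused_indexes
      WHERE object_schema = 'your_database';

      -- Per-index usage counts, busiest first
      SELECT * FROM sys.schema_index_statistics
      WHERE table_schema = 'your_database'
      ORDER BY rows_selected DESC;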

    Read the article

  • SQL Server lock/hang issue

    - by mattwoberts
    Hi, I'm using SQL Server 2008 on Windows Server 2008 R2, all sp'd up. I'm getting occasional issues with SQL Server hanging, with CPU usage at 100%, on our live server. It seems all the wait time on SQL Server when this happens is given to SOS_SCHEDULER_YIELD. Here is the stored proc that causes the hang; I've added the "WITH (NOLOCK)" hints in an attempt to fix what seems to be a locking issue.

      ALTER PROCEDURE [dbo].[MostPopularRead]
      AS
      BEGIN
          SET NOCOUNT ON;
          SELECT c.ForeignId
               , ct.ContentSource as ContentSource
               , sum(ch.HitCount * hw.Weight) as Popularity
               , (sum(ch.HitCount * hw.Weight) * 100) / @Total as Percent
               , @Total as TotalHits
          from ContentHit ch WITH (NOLOCK)
          join [Content] c WITH (NOLOCK) on ch.ContentId = c.ContentId
          join HitWeight hw WITH (NOLOCK) on ch.HitWeightId = hw.HitWeightId
          join ContentType ct WITH (NOLOCK) on c.ContentTypeId = ct.ContentTypeId
          where ch.CreatedDate between @Then and @Now
          group by c.ForeignId, ct.ContentSource
          order by sum(ch.HitCount * hw.HitWeightMultiplier) desc
      END

    The stored proc reads from the table "ContentHit", which tracks when content on the site is clicked (it gets hit quite frequently: anything from 4 to 20 hits a minute), so it's pretty clear that this table is the source of the problem. There is a stored proc that is called to add hit tracks to the ContentHit table; it's pretty trivial, it just builds up a string from the params passed in, which involves a few selects from some lookup tables, followed by the main insert:

      BEGIN TRAN
      insert into [ContentHit] (ContentId, HitCount, HitWeightId, ContentHitComment)
      values (@ContentId, isnull(@HitCount,1), isnull(@HitWeightId,1), @ContentHitComment)
      COMMIT TRAN

    The ContentHit table has a clustered index on its ID column, and I've added another index on CreatedDate since that is used in the select. When I profile the issue, I see the stored proc execute for exactly 30 seconds, then the SQL timeout exception occurs. If it makes a difference, the web application using it is ASP.NET, and I'm using SubSonic (3) to execute these stored procs. Can someone please advise how best I can solve this problem? I don't care about reading dirty data... Thanks
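
    One hedged suggestion (a sketch; the column list is assumed from the query above): if the index on CreatedDate is not covering, every matching row costs a lookup back into the clustered index, which can burn CPU at this row rate. A covering index lets the SELECT be satisfied from the index alone:

      CREATE NONCLUSTERED INDEX IX_ContentHit_CreatedDate_Covering
      ON dbo.ContentHit (CreatedDate)
      INCLUDE (ContentId, HitCount, HitWeightId);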

    Read the article

  • Rationale behind Python's preferred for syntax

    - by susmits
    What is the rationale behind the advocated use of the for i in xrange(...)-style looping constructs in Python? For simple integer looping, the difference in overheads is substantial. I conducted a simple test using two pieces of code. File idiomatic.py:

      #!/usr/bin/env python
      M = 10000
      N = 10000

      if __name__ == "__main__":
          x, y = 0, 0
          for x in xrange(N):
              for y in xrange(M):
                  pass

    File cstyle.py:

      #!/usr/bin/env python
      M = 10000
      N = 10000

      if __name__ == "__main__":
          x, y = 0, 0
          while x < N:
              while y < M:
                  y += 1
              x += 1

    Profiling results were as follows:

      bash-3.1$ time python cstyle.py
      real    0m0.109s
      user    0m0.015s
      sys     0m0.000s

      bash-3.1$ time python idiomatic.py
      real    0m4.492s
      user    0m0.000s
      sys     0m0.031s

    I can understand why the Pythonic version is slower -- I imagine it has a lot to do with calling xrange N times; perhaps this could be eliminated if there were a way to rewind a generator. However, with this much difference in execution time, why would one prefer to use the Pythonic version?
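
    Note a flaw in the benchmark that explains most of the gap (shown corrected in the sketch below): in cstyle.py, y is never reset, so the inner while loop runs its 10,000 iterations only on the first pass through the outer loop and is skipped on the remaining 9,999. A fair C-style version resets y each time, and in CPython it is typically slower than the xrange version, not faster:

      #!/usr/bin/env python
      M = 10000
      N = 10000

      if __name__ == "__main__":
          x = 0
          while x < N:
              y = 0          # reset the inner counter on every outer pass
              while y < M:
                  y += 1
              x += 1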

    Read the article

  • Showing response time in a rails app.

    - by anshul
    I want to display a "This page took x seconds" widget at the bottom of every page in my Rails application. I would like x to reflect the approximate amount of time the request spent on my server. What is the usual way this is done in Rails?
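
    One minimal sketch (assuming Rails 2/3-era filters; names are hypothetical): record the start time in a before_filter, then print the elapsed time at the bottom of the layout, which is rendered near the end of the request.

      # app/controllers/application_controller.rb
      class ApplicationController < ActionController::Base
        before_filter :start_timer

        private

        def start_timer
          @page_started_at = Time.now
        end
      end

      # at the bottom of app/views/layouts/application.html.erb
      <%= "This page took %.3f seconds" % (Time.now - @page_started_at) %>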

    Read the article

  • SQL Server Multiple Joins Taxing CPU

    - by durilai
    I have a stored procedure on SQL Server 2005. It is pulling from a table function and has two joins. When the query is run under a load test, it kills the CPU, 100% across all 16 cores! I have determined that removing one of the joins makes the query run fine, but with both it taxes the CPU.

      Select SKey
      From dbo.tfnGetLatest(@ID) a
      left join [STAGING].dbo.RefSrvc b on a.LID = b.ESIID
      left join [STAGING].dbo.RefSrvc c on a.EID = c.ESIID

    Any help is appreciated; note the join is happening on the same table in a different database on the same server.
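
    A common culprit here (hedged: it depends on whether tfnGetLatest is a multi-statement function) is that SQL Server keeps no statistics for a multi-statement table function, so the optimizer guesses a tiny row count and picks a plan that explodes once two joins compound the bad estimate. A sketch of the usual workaround: materialize the function's output into a temp table, which does have statistics, then join against that.

      SELECT *
      INTO #latest
      FROM dbo.tfnGetLatest(@ID);

      SELECT a.SKey
      FROM #latest a
      LEFT JOIN [STAGING].dbo.RefSrvc b ON a.LID = b.ESIID
      LEFT JOIN [STAGING].dbo.RefSrvc c ON a.EID = c.ESIID;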

    Read the article

  • Slow retrieval of data in SQLite takes a long time using ContentProvider

    - by Arlyn
    I have an application in Android (running 4.0.3) that stores a lot of data in table A, which resides in an SQLite database. I am using a ContentProvider as an abstraction layer above the database. Lots of data here means almost 80,000 records per month. Table A is structured like this:

      String SQL_CREATE_TABLE = "CREATE TABLE IF NOT EXISTS " + TABLE_A + " ( "
          + COLUMN_ID        + " INTEGER PRIMARY KEY NOT NULL"
          + "," + COLUMN_GROUPNO   + " INTEGER NOT NULL DEFAULT(0)"
          + "," + COLUMN_TIMESTAMP + " DATETIME UNIQUE NOT NULL"
          + "," + COLUMN_TAG       + " TEXT"
          + "," + COLUMN_VALUE     + " REAL NOT NULL"
          + "," + COLUMN_DEVICEID  + " TEXT NOT NULL"
          + "," + COLUMN_NEW       + " NUMERIC NOT NULL DEFAULT(1)"
          + " )";

    Here is the index statement:

      String SQL_CREATE_INDEX_TIMESTAMP = "CREATE INDEX IF NOT EXISTS "
          + TABLE_A + "_" + COLUMN_TIMESTAMP
          + " ON " + TABLE_A + " (" + COLUMN_TIMESTAMP + ") ";

    I have defined the columns, as well as the table name, as String constants. I am already experiencing significant slowdown when retrieving data from table A. The problem is that when I retrieve data from this table, I first put it in an ArrayList and then I display it; obviously, this is possibly the wrong way of doing things, and I am trying to find a better approach using a ContentProvider. But that is not the problem that bothers me. The problem is that, for some reason, it takes a lot longer to retrieve data from the other tables, which hold at most 12 records each, and the delay grows as the number of records in table A increases. This does not make any sense: I can understand a delay when retrieving data from table A, but why a delay on the other tables? To clarify, I do not experience this delay if table A is empty or has fewer than 3000 records. What could be the problem?
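
    One diagnostic worth running (a sketch; the table and column names are hypothetical): ask SQLite how it actually executes the slow queries. If a query on a small table is somehow scanning or sorting against table A (for example through a view, a trigger, or a join hidden inside the ContentProvider), EXPLAIN QUERY PLAN will show it.

      EXPLAIN QUERY PLAN
      SELECT * FROM small_table WHERE device_id = ?;

      -- Also refresh the query planner's statistics after large inserts
      ANALYZE;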

    Read the article

  • Faster integer division when denominator is known?

    - by aaa
    Hi, I am working on a GPU device which has very high integer division latency, several hundred cycles, and I am looking to optimize divisions. All divisions are by a denominator from the set { 1, 3, 6, 10 }; the numerator is a runtime positive value, roughly 32000 or less. Due to memory constraints, a lookup table is not an option. Can you think of alternatives? I have thought of computing floating-point inverses and using those to multiply the numerator. Thanks
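
    Because the numerator is bounded (well below 2^16), exact division by each of these constants reduces to one multiply and one shift; a sketch with the constants worked out, valid for the stated range but worth re-verifying on your device:

      // n / 1 == n, so only 3, 6 and 10 need handling
      __device__ unsigned div3(unsigned n)  { return (n * 43691u) >> 17; }  // exact for n < 65536
      __device__ unsigned div6(unsigned n)  { return (n * 43691u) >> 18; }  // exact for n < 131072
      __device__ unsigned div10(unsigned n) { return (n * 26215u) >> 18; }  // exact for n < 43691

      // Why this works, e.g. for 3: 43691 == (2^17 + 2) / 3, so
      // (n * 43691) >> 17 == floor(n/3 + 2n/(3 * 2^17)), and the error
      // term stays below the next integer for all n < 2^16.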

    Read the article

  • Web Page execution

    - by Sweta Jha
    I have a web page which brings in 13K+ records in 20 seconds. There is a menu on the page, clicking on which navigates to another page which is very lightweight. Displaying the data (13K+ records) took only 20 seconds, whereas navigating away from that page took much longer, more than 2 minutes. Can you tell me why the latter takes so much time? I've stopped the page_load code execution on click of the menu, and I've disabled the viewstate for that page as well.

    Read the article

  • SQL Overlapping and Multi-Column Indexes

    - by durilai
    I am attempting to tune some stored procedures and have a question on indexes. I have used the tuning advisor, and it recommended two indexes, both for the same table. The issue is that one index is on a single column, and the other is on multiple columns, including the same column from the first. My question is why, and what is the difference?

      CREATE NONCLUSTERED INDEX [_dta_index_Table1_5_2079723603__K23_K17_K13_K12_K2_K10_K22_K14_K19_K20_K9_K11_5_6_7_15_18]
      ON [dbo].[Table1]
      (
          [EfctvEndDate] ASC,
          [StuLangCodeKey] ASC,
          [StuBirCntryCodeKey] ASC,
          [StuBirStOrProvncCodeKey] ASC,
          [StuKey] ASC,
          [GndrCodeKey] ASC,
          [EfctvStartDate] ASC,
          [StuHspncEnctyIndctr] ASC,
          [StuEnctyMsngIndctr] ASC,
          [StuRaceMsngIndctr] ASC,
          [StuBirDate] ASC,
          [StuBirCityName] ASC
      )
      INCLUDE (
          [StuFstNameLgl],
          [StuLastOrSrnmLgl],
          [StuMdlNameLgl],
          [StuIneligSnorImgrntIndctr],
          [StuExpctdGrdtngClYear]
      )
      WITH (SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF) ON [PRIMARY]
      go

      CREATE NONCLUSTERED INDEX [_dta_index_Table1_5_2079723603__K23]
      ON [dbo].[Table1]
      (
          [EfctvEndDate] ASC
      ) WITH (SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF) ON [PRIMARY]
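
    One way to see the difference in practice (a sketch, assuming SQL Server 2005 or later): after a representative workload, compare how the optimizer actually uses each index. The narrow index is typically picked for cheap seeks and range scans on EfctvEndDate alone, while the wide one is picked when its extra key and INCLUDE columns make it covering for a whole query.

      SELECT i.name, s.user_seeks, s.user_scans, s.user_lookups
      FROM sys.dm_db_index_usage_stats s
      JOIN sys.indexes i
        ON i.object_id = s.object_id AND i.index_id = s.index_id
      WHERE s.database_id = DB_ID()
        AND s.object_id = OBJECT_ID('dbo.Table1');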

    Read the article

  • Will more CPUs/cores help with VS.NET build times?

    - by LoveMeSomeCode
    I was wondering if anyone knew whether Visual Studio .NET has a parallel build process or not? I have a solution with lots of projects, and every project has lots of markup/code, lots of types, etc. Just sitting there with IntelliSense on runs it up to about 700MB. But the build times are really slow and only seem to max out one of my two CPU cores. Does this mean the build process is single-threaded? My solution's build dependency chain isn't linear, so I don't see why it couldn't be building some of the projects in parallel. I remember Joel Spolsky blogging about his new SSD, and how it didn't help with compile times, but he didn't mention which compiler he was using. We're using VS 2005. Anyone know how its compilation works? And is it any different/better in 2008/2010?
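
    For what it's worth (hedged: behavior differs by Visual Studio version), the MSBuild engine that ships with .NET 3.5 and later can build independent projects in parallel when invoked from the command line with the multiprocessor switch, and VS 2008 and later expose a related "maximum number of parallel project builds" setting under Tools > Options > Projects and Solutions > Build and Run. A sketch, with a hypothetical solution name:

      msbuild MySolution.sln /m:2 /p:Configuration=Release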

    Read the article

  • What kind of data processing problems would CUDA help with?

    - by Chris McCauley
    Hi, I've worked on many data matching problems, and very often they boil down to quickly running, in parallel, many implementations of CPU-intensive algorithms such as Hamming / edit distance. Is this the kind of thing that CUDA would be useful for? What kinds of data processing problems have you solved with it? Is there really an uplift over the standard quad-core Intel desktop? Chris
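
    As a flavor of what maps well (a minimal sketch, not benchmarked here): pairwise Hamming distance is nearly ideal for CUDA, since each pair is independent and the per-element work is an XOR plus a population count, for which the hardware has an intrinsic.

      __global__ void hamming(const unsigned* a, const unsigned* b,
                              int* dist, int n)
      {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i < n) {
              // __popc counts set bits; XOR marks the differing bit positions
              dist[i] = __popc(a[i] ^ b[i]);
          }
      }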

    Read the article

  • Where is my python script spending time? Is there "missing time" in my cprofile / pstats trace?

    - by fmark
    I am attempting to profile a long-running Python script. The script does some spatial analysis on a raster GIS data set using the gdal module. The script currently uses three files: the main script, which loops over the raster pixels, called find_pixel_pairs.py; a simple cache in lrucache.py; and some misc classes in utils.py. I have profiled the code on a moderately sized dataset. pstats returns:

      p.sort_stats('cumulative').print_stats(20)
      Thu May  6 19:16:50 2010    phes.profile

      355483738 function calls in 11644.421 CPU seconds

      Ordered by: cumulative time
      List reduced from 86 to 20 due to restriction <20>

         ncalls    tottime   percall    cumtime   percall  filename:lineno(function)
              1      0.008     0.008  11644.421 11644.421  <string>:1(<module>)
              1  11064.926 11064.926  11644.413 11644.413  find_pixel_pairs.py:49(phes)
      340135349    544.143     0.000    572.481     0.000  utils.py:173(extent_iterator)
        8831020     18.492     0.000     18.492     0.000  {range}
         231922      3.414     0.000      8.128     0.000  utils.py:152(get_block_in_bands)
         142739      1.303     0.000      4.173     0.000  utils.py:97(search_extent_rect)
         745181      1.936     0.000      2.500     0.000  find_pixel_pairs.py:40(is_no_data)
         285478      1.801     0.000      2.271     0.000  utils.py:98(intify)
         231922      1.198     0.000      2.013     0.000  utils.py:116(block_to_pixel_extent)
         695766      1.990     0.000      1.990     0.000  lrucache.py:42(get)
        1213166      1.265     0.000      1.265     0.000  {min}
        1031737      1.034     0.000      1.034     0.000  {isinstance}
         142740      0.563     0.000      0.909     0.000  utils.py:122(find_block_extent)
         463844      0.611     0.000      0.611     0.000  utils.py:112(block_to_pixel_coord)
         745274      0.565     0.000      0.565     0.000  {method 'append' of 'list' objects}
         285478      0.346     0.000      0.346     0.000  {max}
         285480      0.346     0.000      0.346     0.000  utils.py:109(pixel_coord_to_block_coord)
            324      0.002     0.000      0.188     0.001  utils.py:27(__init__)
            324      0.016     0.000      0.186     0.001  gdal.py:848(ReadAsArray)
              1      0.000     0.000      0.160     0.160  utils.py:50(__init__)

    The top two calls contain the main loop, the entire analysis. The remaining calls sum to less than 625 of the 11644 seconds. Where are the remaining 11,000 seconds spent? Is it all within the main loop of find_pixel_pairs.py? If so, can I find out which lines of code are taking most of the time?
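
    One pointer hiding in the output itself, plus a hedged sketch: the tottime column already answers the first question; find_pixel_pairs.py:49(phes) spends 11064 of the 11644 seconds in its own body, not in the functions it calls. To see which lines, the third-party line_profiler package times a decorated function line by line (a sketch, assuming it is installed; the function body here is a stand-in):

      # 'profile' is injected into builtins by the kernprof launcher,
      # so this script is meant to be run via kernprof, not plain python
      @profile
      def phes(pixels):
          total = 0
          for p in pixels:          # per-line time shows up in the report
              total += p * p
          return total

      if __name__ == "__main__":
          phes(range(1000000))

      # run with:  kernprof.py -l -v find_pixel_pairs.py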

    Read the article

  • iPhone - Get a pointer to the data behind CGDataProvider?

    - by jtrim
    I'm trying to take a CGImage and copy its data into a buffer for later processing. The code below is what I have so far, but there's one thing I don't like about it - it's copying the image data twice. Once for CGDataProviderCopyData() and once for the :getBytes:length call on imgData. I haven't been able to find a way to copy the image data directly into my buffer and cut out the CGDataProviderCopyData() step, but there has to be a way...any pointers? (...pun ftw)

      NSData *imgData = (NSData *)(CGDataProviderCopyData(CGImageGetDataProvider(myCGImageRef)));
      CGImageRelease(myCGImageRef);

      // i've got a previously-defined pointer to an available buffer called "mybuff"
      [imgData getBytes:mybuff length:[imgData length]];
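
    One way to drop the second copy (a sketch; CoreFoundation only, error handling omitted): CFDataGetBytePtr exposes a read-only pointer into the CFData returned by CGDataProviderCopyData, so the bytes can be processed in place, or memcpy'd once if a writable buffer is truly needed.

      CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(myCGImageRef));
      const UInt8 *bytes = CFDataGetBytePtr(data);   // no copy; valid while 'data' lives
      size_t length = CFDataGetLength(data);

      // ... process 'bytes' directly, or copy once into your own buffer:
      // memcpy(mybuff, bytes, length);

      CFRelease(data);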

    Read the article

  • Explanation for expires header

    - by sushil bharwani
    I have a Joomla application running on Apache. To improve site performance, we have written a .htaccess file at the root of the application, setting a far-future Expires header on all static content. As desired, the first time, the files load fresh with a 200 status code; when I click the same link again, many of the files are served directly from cache. I need an explanation for two things. First, when I press F5, a number of files load with a 304 status code; however, I expected them to come directly from cache without hitting the server for a status header. Second, when I close the browser and come back to the same page, I see the same thing: a number of files load with a 304 status code, although I thought they would load directly from the browser cache. I understand that a 304 also serves the file from the browser cache, but I want to avoid the header round-trip to the server, since my static files won't ever change. I should also add that my requests are over an HTTPS connection; does that create any issue?
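
    Part of this is browser behavior that no header can override: F5 explicitly asks the browser to revalidate, so conditional requests and 304s are expected on refresh, while plain navigation and links should come straight from cache. A hedged .htaccess sketch (assuming mod_expires and mod_headers are enabled) that pairs far-future Expires with an explicit Cache-Control, which some browsers honor more reliably over HTTPS:

      <IfModule mod_expires.c>
        ExpiresActive On
        ExpiresByType image/png  "access plus 1 year"
        ExpiresByType text/css   "access plus 1 year"
        ExpiresByType application/javascript "access plus 1 year"
      </IfModule>
      <IfModule mod_headers.c>
        Header set Cache-Control "public, max-age=31536000"
      </IfModule>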

    Read the article

  • How to speed up the reading of innerHTML in IE8?

    - by Dennis Cheung
    I am using jQuery with the DataTables plugin, and now I have a big performance issue on the following line.

      aLocalData[jInner] = nTds[j].innerHTML; // jquery.dataTables.js:2220

    I have an ajax call whose result is a string in HTML format. I convert it into HTML nodes, and that part is ok.

      var $result = $('<div/>').html(result).find("*:first"); // similar to $result=$(result) but much faster in Fx

    Then I upgrade the result from a plain table to a sortable DataTable. The speed is acceptable in Fx (around 4 sec for 900 rows), but unacceptable in IE8 (more than 100 seconds). I checked deeper using the built-in profiler, and found the above single line takes 99.9% of the time. How can I speed it up? Anything I missed?

      nTrs = oSettings.nTable.getElementsByTagName('tbody')[0].childNodes;
      for ( i=0, iLen=nTrs.length ; i<iLen ; i++ )
      {
          if ( nTrs[i].nodeName == "TR" )
          {
              iThisIndex = oSettings.aoData.length;
              oSettings.aoData.push( {
                  "nTr": nTrs[i],
                  "_iId": oSettings.iNextId++,
                  "_aData": [],
                  "_anHidden": [],
                  "_sRowStripe": ''
              } );
              oSettings.aiDisplayMaster.push( iThisIndex );
              aLocalData = oSettings.aoData[iThisIndex]._aData;
              nTds = nTrs[i].childNodes;
              jInner = 0;
              for ( j=0, jLen=nTds.length ; j<jLen ; j++ )
              {
                  if ( nTds[j].nodeName == "TD" )
                  {
                      aLocalData[jInner] = nTds[j].innerHTML; // jquery.dataTables.js:2220
                      jInner++;
                  }
              }
          }
      }
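
    A hedged alternative that sidesteps the problem (a sketch using the legacy DataTables API; the URL and column definitions are hypothetical): have the server return the rows as JSON and feed them to DataTables via aaData, so it never has to read innerHTML out of 900 rows of DOM in the first place.

      $.getJSON('/mydata', function (rows) {
          // rows is expected to be an array of arrays, one per table row
          $('#myTable').dataTable({
              "aaData": rows,
              "aoColumns": [
                  { "sTitle": "Name" },
                  { "sTitle": "Value" }
              ]
          });
      });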

    Read the article

  • XDocument holding onto Memory?

    - by Jon
    I have an application that does an XDocument.Load from a 20MB file, which then gets passed to a form to view its contents:

      openFileDialog1.FileName = "";
      if (openFileDialog1.ShowDialog() == DialogResult.OK)
      {
          AuditFile = XDocument.Load(openFileDialog1.FileName);
          fmAuditLogViewer AuditViewer = new fmAuditLogViewer();
          AuditViewer.ReportDocument = AuditFile;
          AuditViewer.Init();
          AuditViewer.ShowDialog();
          AuditViewer.Dispose();
          AuditFile.RemoveNodes();
          AuditFile = null;
      }

    In Task Manager I can see the memory used by my application shoot up when I open this file. When I have finished viewing the file in my application I call:

      myXDocument.RemoveNodes();
      myXDocument = null;

    However, the memory use shown in Task Manager stays pretty high for my app. Is the XDocument still being held in memory, and can I decrease the memory usage of my app?
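
    A hedged diagnostic (not something to leave in production code): nulling the reference only makes the tree collectible; the CLR reclaims it at the next garbage collection and may keep the freed pages in the process working set anyway, which is what Task Manager reports. Forcing a collection shows whether the XDocument was actually still reachable:

      AuditFile = null;
      GC.Collect();
      GC.WaitForPendingFinalizers();
      GC.Collect();
      // If memory still does not drop, something (e.g. the viewer form)
      // is holding a reference to the document or its nodes.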

    Read the article

  • How can "set timestamp" be a slow query?

    - by Peder
    My slow query log is full of entries like the following:

      # Query_time: 1.016361  Lock_time: 0.000000  Rows_sent: 0  Rows_examined: 0
      SET timestamp=1273826821;
      COMMIT;

    I guess the SET timestamp command is issued by replication, but I don't understand how SET timestamp can take over a second. Any ideas?
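
    A hedged pointer: in the slow log, the SET timestamp line is just bookkeeping the server writes before each logged statement; the statement actually being timed here is the COMMIT on the next line. A commit that takes a second usually means the transaction is waiting on the log flush to disk; worth checking (assuming InnoDB):

      SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
      -- 1 = write and sync the log to disk at every commit (durable, slowest)
      -- 2 = write at commit, sync roughly once a second
      -- 0 = write and sync roughly once a second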

    Read the article
