Search Results

Search found 26978 results on 1080 pages for 'load testing'.

Page 487 of 1080

  • Why does sound stop working after a while?

    - by badp
    I don't know how to reproduce this problem, because I don't regularly play music or sound. All I know is that, sometimes, I'll load a video (from YouTube or from a local file) and there will be no sound. Everything looks fine software-wise:

    - Rebooting always fixes it.
    - aplay, paplay and pals give no error message.
    - I'm not in the audio group, as advised.
    - The device exists and appears to be in use:

        $ lsof /dev/snd/by-path/pci-0000\:00\:1b.0
        COMMAND   PID   USER FD  TYPE DEVICE SIZE/OFF NODE NAME
        pulseaudi 17313 badp 23u CHR  116,10 0t0      7628 /dev/snd/by-path/../controlC0
        pulseaudi 17313 badp 30u CHR  116,10 0t0      7628 /dev/snd/by-path/../controlC0

    Restarting pulseaudio or ALSA seems to do no good. What is wrong here?

    Read the article

  • Why does Live-USB boot freeze at the moving dots?

    - by Caleb
    I am currently trying to boot into Ubuntu 12.04 x86 using a USB stick. I used Unetbootin to do all that boot setup and whatnot, but for some reason Ubuntu won't load past the screen with the dots and the purple background. What happens is the dots all light up once, then go back to white, then they all light up at the same time, and then it just gets stuck there. I've even tried an x64 copy (since my CPU supports it) but had no luck with that either. I've even tried different distributions of Linux, and all have failed to load. Please help.

    Read the article

  • What is a correct step-by-step logic of exporting a scene with baked occlusion for loading it at runtime?

    - by myWallJSON
    I wonder what the correct step-by-step logic is for exporting a scene with baked occlusion (culling data) and loading that scene at runtime (on the fly, from the internet for example). Currently my plan looks like this:

    1. I create prefabs.
    2. I place them onto my scene (into the Hierarchy) - say, 20 buffaloes, some horses and some buildings.
    3. I create an empty prefab and drag all my scene objects from the Hierarchy onto it.
    4. I export the prefab.

    So generally I put all my scene objects into one large prefab and export it, but it seems that all objects that were marked as static get this property turned off when loaded at runtime, so no frustum culling and no occlusion culling happens. So I wonder: what is the correct way of exporting Scene + Objects + Occlusion (and other culling) data for a future load of such a scene at runtime? I'm asking about the current 3.5.2 Pro and the upcoming 4 Pro versions of Unity3D.

    Read the article

  • Why is the dash so unresponsive, and is there a way to fix this?

    - by Jon
    I just upgraded to 12.04. When I press the Super key to open the dash, there's a lag of 1-3 seconds before it displays, with no other programs running. (This is similar, but not identical, to the issue described in "Dash application search unresponsive at startup" about 11.10.) At login time, this lag is up to 10 seconds, and sometimes the dash doesn't respond to the Super key at all. In contrast, the launcher Kupfer responds to its hotkey within milliseconds, and to my typing an application name in fractions of a second. Is there a way to load the dash into memory, or a RAM disk of some sort, to make it more responsive?

    Read the article

  • Resize content for "frame" on .aspx page which draws product content from supplier's page

    - by zenbike
    I have a retail store site, no online sales, which displays the web page of our supplier in a "frame" in order to have the most accurate and up-to-date information for our customers (example). My issue is that the page it pulls in doesn't fit in the frame. It looks pretty poor, and part of the content is obscured. Is there a way to scale the content drawn in to fit the size of the frame? The same site also has an intermittent issue with the Flash banner loading; when it doesn't load, the layout of the header on the page is awful. Any ideas there will also be appreciated.

    Read the article

  • IDirect3DDevice9Ex and D3DPOOL_MANAGED?

    - by bluescrn
    So I wanted to switch to IDirect3DDevice9Ex, purely for the SetMaximumFrameLatency function, as fullscreen vsynced D3D seemed to produce noticeable input lag. But then it tells me 'ha ha ha! now you can't use D3DPOOL_MANAGED!':

        Direct3D9: (ERROR) :D3DPOOL_MANAGED is not valid with IDirect3DDevice9Ex

    Is this really as unpleasant as it looks (when you're relying quite heavily on managed resources), or is there a simple solution? If it really does mean manual management of everything (reloading all static textures, VBs, and IBs on a device reset), is it worth the hassle? Will IDirect3DDevice9Ex bring enough benefit to make it worth writing a new resource manager? I'm starting to think I must be doing something wrong, due to this:

        Direct3D9: (ERROR) :Lock is not supported for textures allocated with POOL_DEFAULT unless they are marked D3DUSAGE_DYNAMIC.

    So if I put my (static) textures in POOL_DEFAULT, do they need flagging as D3DUSAGE_DYNAMIC just because I lock them once to load the data in?
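    (For reference: a static texture does not need D3DUSAGE_DYNAMIC just to receive its initial data. The usual pattern is to lock a D3DPOOL_SYSTEMMEM staging texture and copy it across with UpdateTexture. A minimal sketch, assuming an existing IDirect3DDevice9Ex* device; w, h and fillPixels are illustrative placeholders, not real APIs:)

        // Sketch: upload static texture data on a 9Ex device, where
        // D3DPOOL_MANAGED is unavailable.
        IDirect3DTexture9 *staging = nullptr, *gpuTex = nullptr;
        device->CreateTexture(w, h, 1, 0, D3DFMT_A8R8G8B8,
                              D3DPOOL_SYSTEMMEM, &staging, nullptr);
        device->CreateTexture(w, h, 1, 0, D3DFMT_A8R8G8B8,
                              D3DPOOL_DEFAULT, &gpuTex, nullptr);

        D3DLOCKED_RECT lr;
        staging->LockRect(0, &lr, nullptr, 0); // SYSTEMMEM is lockable
        fillPixels(lr.pBits, lr.Pitch, w, h);  // copy rows using lr.Pitch
        staging->UnlockRect(0);

        device->UpdateTexture(staging, gpuTex); // GPU-side copy into the DEFAULT pool
        staging->Release();                     // gpuTex keeps the data

    (One consolation: a 9Ex device on Vista/WDDM largely does away with the classic lost-device reset, so default-pool resources rarely need rebuilding in practice.)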

    Read the article

  • Unity iOS optimization and draw calls

    - by vzm
    I am curious which methods I should use to optimize my Unity project for iOS hardware. I have very few image effects running (a directional light with low-res shadows), and I used the Combine Children script from the Standard Assets to lessen the load on the CPU. My project currently runs at 45-57 draw calls in non-intensive segments, and up to 178 in intensive segments. I heard that static batching relieves some of the stress, but the game has the environment moving around the player instead of the player moving around the environment. Is there any alternative I may look towards to improve the draw-call number?

    Read the article

  • Ogre3D: seeking advice about game file management

    - by Tibor
    I'm working on a new game, and its related level editor, based on Ogre3D. I was thinking about how I could manage the game files, knowing that Ogre uses .mesh files for models, .material files for material/texture information, etc. At first I thought about a common .zip folder decompressed at runtime (the same way Torchlight and the Ogre samples do it). But this way the game assets become a monolithic archive: loading takes time, and it could be difficult to patch them eventually. So, let's say I have a game object named "Cube" that I want to load in my program. Going for modularity, what if I create a compressed file (using zlib compression routines) named Cube.extname, containing its sub-files Cube.mesh, Cube.material and so on? Are there any alternatives, or should I stick with compressed objects? PS: Just to clear things up, the answer is unrelated to my program code; at the moment I'm using "resources.cfg" pointing to the OgreSDK media directory.
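    (For reference: Ogre's resource system can mount individual archives like this through ResourceGroupManager, so a per-object archive maps naturally onto its own resource group. A minimal sketch, assuming Ogre 1.x; the "Cube.zip" path and group name are illustrative:)

        // Sketch: mount one per-object archive as its own resource
        // group, so it can be loaded and unloaded independently of
        // the rest of the game data.
        #include <OgreResourceGroupManager.h>

        Ogre::ResourceGroupManager& rgm =
            Ogre::ResourceGroupManager::getSingleton();
        rgm.addResourceLocation("media/objects/Cube.zip", "Zip", "Cube");
        rgm.initialiseResourceGroup("Cube"); // parse .material scripts etc.
        rgm.loadResourceGroup("Cube");       // load the actual resources
        // ... use the object ...
        rgm.unloadResourceGroup("Cube");     // drop it when no longer needed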

    Read the article

  • Introducing the new Demantra Guided Resolutions!

    - by user702295
    There is a new method to find your solution, called a guided resolution or search helper. Currently we cover 5 specific topical areas:

    - Oracle Demantra 7.x and Above Data Loading Guided Resolution, Note 1461899.1
    - Oracle Demantra Load Data Issues Not Necessarily Error Code Related Guided Resolution, Note 1468884.1
    - Oracle Demantra 7.x and Above Workflow Issues Guided Resolution, Note 1353217.1
    - Oracle Demantra 7.x and Above Worksheet Related Guided Resolution, Note 1486639.1
    - Oracle Demantra 7.x and Above Engine Related Guided Resolution, Note 1486634.1

    The guides will be updated with the latest proven solutions on a regular basis, keeping the content relevant. What is a guided resolution? What is the best practice for using a guided resolution? See "How to Use the Demantra Guided Resolution, a Proactive Services Tutorial", Note 1473992.1.

    Read the article

  • Ubuntu 12.04 desktop crash [closed]

    - by David Mannock
    12.04 is stable under a light load, but not under intensive use of Gaussian 09 vB1. It looks like a heating issue, but psensor says that all is well on 32 cores at 56 C. Similar results for 64 cores. The machine shuts down after 2-3 hours; syslog shows the shutdown, and the Whoopsie crash reporter sends in a report. After 259 updates over the weekend, I am left wondering what the heck is wrong with this release. My answer would be "EVERYTHING!". Can someone help me do some systematic checks on this OS and hardware?

    Read the article

  • Understanding the meaning of 'High Performance' in Extreme Transaction Processing

    - by kyap
    Despite my previous blog entries on SOA/BPM and Identity Management, the domain I'm most passionate about is definitely Extreme Transaction Processing, commonly called XTP.

    I came across XTP back in 2007, while I was still FMW Product Manager in EMEA. At that time Oracle acquired a company called Tangosol, which owned a unique product called Coherence that we renamed to Oracle Coherence. Beside this innovative renaming of the product, to be honest, I didn't know much about it, except that it was a "distributed in-memory cache for Extreme Transaction Processing"... not very helpful still.

    In general, when people don't fully understand a technology or a concept, they tend to find some shortcut, correct or not, to justify their lack of understanding... and of course I was part of this category of individuals. And the shortcut was "Oracle Coherence Cache helps to improve Performance". An excellent marketing slogan... but still not very meaningful. By chance I was able to get away quickly from that group in July 2007 at Thames Valley Park (UK), after I attended one of the most interesting workshops of my 10-year career at Oracle, delivered by Brian Oliver.

    The biggest mistake I made was to assume that the performance improvement with Coherence was about response time. Which could be considered legitimate at the time, because after all, caches help to reduce latency on cached data access and hence reduce response time. But like all caches, you need to define caching and expiration policies, think about the cache-miss strategy, and most of the time partially re-write your application in order to work with the cache. As a result, the expected benefit vanishes... so, not very useful then?

    The key mistake was my perception, or obsession, of how performance improvement should be driven, and I strongly believe this is still a common problem for most developers. In fact, we all know that the performance of a system is generally presented as the Capacity (or Throughput), with 2 important dimensions, Speed (response time) and Volume (load):

        Capacity (TPS) = Volume (T) / Speed (S)

    To increase the Capacity, we can either reduce the Speed (in terms of response time), or increase the Volume. However, we tend to focus only on reducing the Speed dimension, perhaps because it is more concrete and tangible to measure, and nicer to present to our management because there's a direct impact on the end-user experience. On the other hand, we assume the Volume can be addressed by the underlying hardware or software stack, so if we need more capacity (scale out), we just add more hardware or software. Unfortunately, reality proves that IT is never as ideal as we assume...

    The challenge with the Speed-improvement approach is that it is generally difficult and costly to make things that are already fast... faster. And adding Coherence will not necessarily help either. Even if we manage to do so, the Capacity cannot increase forever, because... the Speed can be influenced by the Volume. The performance picture is the same for every traditional system: the increase of Volume (transactions) will also increase the Speed (response time) at some point. The reason is simple: most of the time, the application logic was not designed to scale. As an example, if you have a while-loop in your application, it is natural to expect that parsing 200 entries will require double the execution time of parsing 100 entries.
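    (A quick worked example of the capacity formula above, with invented numbers: with a sustainable Volume of 300 in-flight transactions and a Speed of 0.1 s response time, Capacity = 300 / 0.1 = 3,000 TPS. Halving the response time to 0.05 s doubles capacity, but so does doubling the sustainable Volume to 600 transactions; the second lever is the one XTP targets, because it scales with added hardware rather than faster hardware.)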
If you need to "Speed-up" the execution, you can only upgrade your hardware (scale-up) with faster CPU and/or network to reduce network latency. It is technically limited and economically inefficient. And this is exactly where XTP and Coherence kick in. The primary objective of XTP is about designing applications which can scale-out for increasing the Volume, by applying coding techniques to keep the execution-time as constant as possible, independently of the number of runtime data being manipulated. It is actually not just about having an application running as fast as possible, but about having a much more predictable system, with constant response-time and linearly scale, so we can easily increase throughput by adding more hardwares in parallel. It is in general combined with the Low Latency Programming model, where we tried to optimize the network usage as much as possible, either from the programmatic angle (less network-hoops to complete a task), and/or from a hardware angle (faster network equipments). In this picture, Oracle Coherence can be considered as software-level XTP enabler, via the Distributed-Cache because it can guarantee: - Constant Data Objects access time, independently from the number of Objects and the Coherence Cluster size - Data Objects Distribution by Affinity for in-memory data grouping - In-place Data Processing for parallel executionTo summarize, Oracle Coherence is indeed useful to improve your application performance, just not in the way we commonly think. It's not about the Speed itself, but about the overall Capacity with Extreme Load while keeping consistant Speed. In the future I will keep adding new blog entries around this topic, with some sample codes experiences sharing that I capture in the last few years. In the meanwhile if you want to know more how Oracle Coherence, I strongly suggest you to start with checking how our worldwide customers are using Oracle Coherence first, then you can start playing with the product through our tutorial.Have Fun !

    Read the article

  • JavaScript naming conventions

    - by ManuPK
    I am from a Java background and am new to JavaScript. I have noticed many JavaScript methods using single-character parameter names, as in the following example:

        doSomething(a, b, c)

    I don't like it, but a fellow JavaScript developer convinced me that this is done to reduce the file size, noting that JavaScript files have to be transferred to the browser. Then I found myself talking to another developer. He showed me how Firefox will truncate variable names to load the page faster. Is this a standard practice among web browsers? What are the best-practice naming conventions that should be followed when programming in JavaScript? Does identifier length matter, and if so, to what extent?

    Read the article

  • ROracle support for TimesTen In-Memory Database

    - by Sherry LaMonica
    Today's guest post comes from Jason Feldhaus, a Consulting Member of Technical Staff in the TimesTen Database organization at Oracle. He shares with us a sample session using ROracle with the TimesTen In-Memory Database.

    Beginning in version 1.1-4, ROracle includes support for the Oracle TimesTen In-Memory Database, version 11.2.2. TimesTen is a relational database providing very fast response time and high throughput through its memory-centric architecture. TimesTen is designed for low-latency, high-volume data, and event and transaction management. A TimesTen database resides entirely in memory, so no disk I/O is required for transactions and query operations. TimesTen is used in applications requiring very fast and predictable response time, such as real-time financial services trading applications and large web applications. TimesTen can be used as the database of record or as a relational cache database to Oracle Database.

    ROracle provides an interface between R and the database, providing the rich functionality of the R statistical programming environment together with the SQL query language. ROracle uses the OCI libraries to handle database connections, providing much better performance than standard ODBC.

    The latest ROracle enhancements include:

    - support for Oracle TimesTen In-Memory Database
    - support for date-time using R's POSIXct/POSIXlt data types
    - RAW, BLOB and BFILE data type support
    - option to specify the number of rows per fetch operation
    - option to prefetch LOB data
    - break support using Ctrl-C
    - statement caching support

    TimesTen 11.2.2 contains enhanced support for analytics workloads and complex queries:

    - analytic functions: AVG, SUM, COUNT, MAX, MIN, DENSE_RANK, RANK, ROW_NUMBER, FIRST_VALUE and LAST_VALUE
    - analytic clauses: OVER PARTITION BY and OVER ORDER BY
    - multidimensional grouping operators: grouping clauses GROUP BY CUBE, GROUP BY ROLLUP, GROUP BY GROUPING SETS, and grouping functions GROUP, GROUPING_ID, GROUP_ID
    - WITH clause, which allows repeated references to a named subquery block
    - aggregate expressions over DISTINCT expressions
    - general expressions that return a character string in the source or a pattern within the LIKE predicate
    - ability to order nulls first or last in a sort result (NULLS FIRST or NULLS LAST in the ORDER BY clause)

    Note: some functionality is only available with Oracle Exalytics; refer to the TimesTen product licensing document for details.

    Connecting to TimesTen is easy with ROracle. Simply install and load the ROracle package and load the driver:

        > install.packages("ROracle")
        > library(ROracle)
        Loading required package: DBI
        > drv <- dbDriver("Oracle")

    Once the ROracle package is installed, create a database connection object and connect to a TimesTen direct-driver DSN as the OS user:

        > conn <- dbConnect(drv, username ="", password="", dbname = "localhost/SampleDb_1122:timesten_direct")

    You have the option to report the server type: Oracle or TimesTen?

        > print (paste ("Server type =", dbGetInfo (conn)$serverType))
        [1] "Server type = TimesTen IMDB"

    To create tables in the database using R data frame objects, use the function dbWriteTable. In the following example we write the built-in iris data frame to TimesTen. The iris data set is a small example data set containing 150 rows and 5 columns. We include it here not to highlight performance, but so users can easily run this example in their R session:

        > dbWriteTable (conn, "IRIS", iris, overwrite=TRUE, ora.number=FALSE)
        [1] TRUE

    Verify that the newly created IRIS table is available in the database. To list the available tables and table columns in the database, use dbListTables and dbListFields, respectively:

        > dbListTables (conn)
        [1] "IRIS"
        > dbListFields (conn, "IRIS")
        [1] "SEPAL.LENGTH" "SEPAL.WIDTH"  "PETAL.LENGTH" "PETAL.WIDTH"  "SPECIES"

    To retrieve a summary of the data from the database, we need to save the results to a local object. The following call saves the results of the query as a local R object, iris.summary. The ROracle function dbGetQuery is used to execute an arbitrary SQL statement against the database. When connected to TimesTen, the SQL statement is processed completely within main memory for the fastest response time:

        > iris.summary <- dbGetQuery(conn, 'SELECT SPECIES,
            AVG ("SEPAL.LENGTH") AS AVG_SLENGTH,
            AVG ("SEPAL.WIDTH")  AS AVG_SWIDTH,
            AVG ("PETAL.LENGTH") AS AVG_PLENGTH,
            AVG ("PETAL.WIDTH")  AS AVG_PWIDTH
            FROM IRIS GROUP BY ROLLUP (SPECIES)')
        > iris.summary
             SPECIES AVG_SLENGTH AVG_SWIDTH AVG_PLENGTH AVG_PWIDTH
        1     setosa    5.006000   3.428000       1.462   0.246000
        2 versicolor    5.936000   2.770000       4.260   1.326000
        3  virginica    6.588000   2.974000       5.552   2.026000
        4       <NA>    5.843333   3.057333       3.758   1.199333

    Finally, commit and disconnect from the TimesTen database:

        > dbCommit (conn)
        [1] TRUE
        > dbDisconnect (conn)
        [1] TRUE

    We encourage you to download Oracle software for evaluation from the Oracle Technology Network. See these links for our software: TimesTen In-Memory Database, ROracle. As always, we welcome comments and questions on the TimesTen and Oracle R technical forums.

    Read the article

  • Oracle NoSQL Database Exceeds 1 Million Mixed YCSB Ops/Sec

    - by Charles Lamb
    We ran a set of YCSB performance tests on Oracle NoSQL Database using SSD cards and Intel Xeon E5-2690 CPUs, with the goal of achieving 1M mixed ops/sec on a 95% read / 5% update workload. We used the standard YCSB parameters: 13-byte keys and 1KB data size (1,102 bytes after serialization). The maximum database size was 2 billion records, or approximately 2 TB of data. We sized the shards to ensure that this was not an "in-memory" test (i.e. the data portion of the B-Trees did not fit into memory). All updates were durable and used the "simple majority" replica ack policy, effectively 'committing to the network'. All read operations used the Consistency.NONE_REQUIRED parameter, allowing reads to be performed on any replica.

    In the past we achieved 100K ops/sec using SSD cards on a single-shard cluster (replication factor 3), so for this test we used 10 shards on 15 Storage Nodes, with each SN carrying 2 Rep Nodes and each RN assigned to its own SSD card. After correcting a scaling problem in YCSB, we blew past the 1M ops/sec mark with 8 shards and proceeded to hit 1.2M ops/sec with 10 shards.

    Hardware Configuration

    We used 15 servers, each configured with two 335 GB SSD cards. We did not have homogeneous CPUs across all 15 servers available to us, so 12 of the 15 were Xeon E5-2690 (2.9 GHz, 2 sockets, 32 threads, 193 GB RAM) and the other 3 were Xeon E5-2680 (2.7 GHz, 2 sockets, 32 threads, 193 GB RAM). There might have been some upside in having all 15 machines configured with the faster CPU, but since CPU was not the limiting factor we don't believe the improvement would be significant. The client machines were Xeon X5670 (2.93 GHz, 2 sockets, 24 threads, 96 GB RAM). Although the clients had 96 GB of RAM, neither the NoSQL Database nor the YCSB clients require anywhere near that amount of memory, and the test could just as easily have been run with much less. Networking was all 10GigE.

    YCSB Scaling Problem

    We made three modifications to the YCSB benchmark. The first was to allow the test to accommodate more than 2 billion records (effectively ints vs. longs). To keep the key size constant, we changed the code to use base 32 for the user ids.

    The second change involved the way we run the YCSB client, in order to make the test itself horizontally scalable. The basic problem has to do with the way the YCSB test creates its Zipfian distribution of keys, which is intended to model "real" loads by generating clusters of key collisions. Unfortunately, the percentage of collisions on the most contentious keys remains the same even as the number of keys in the database increases. As we scale up the load, the number of collisions on those keys increases as well, eventually exceeding the capacity of the single server used for a given key. This is not a workload that is realistic or amenable to horizontal scaling. YCSB does provide alternate key distribution algorithms, so this is not a shortcoming of YCSB in general. We decided that a better model would be for the key collisions to be limited to a given YCSB client process. That way, as additional YCSB client processes (i.e. additional load) are added, they each maintain the same number of collisions they encounter themselves, but do not increase the number of collisions on a single key in the entire store. We added client processes proportionally to the number of records in the database (and therefore the number of shards). This change to the use of YCSB better models a use case where new groups of users are likely to access either just their own entries, or entries within their own subgroups, rather than all users showing the same interest in a single global collection of keys. If an application finds every user having the same likelihood of wanting to modify a single global key, that application has no real hope of getting horizontal scaling.

    Finally, we used read/modify/write (also known as "compare and set") style updates during the mixed phase. This uses versioned operations to make sure that no updates are lost. This mode of operation provides better application behavior than the way we have typically run YCSB in the past, and is only practical at scale because we eliminated the shared-key collision hotspots. It is also a more realistic testing scenario. To reiterate, all updates used a simple-majority replica ack policy, making them durable.

    Scalability Results

    In the table below, the "KVS Size" column is the number of records with the number of shards and the replication factor. Hence, the first row indicates 400m total records in the NoSQL Database (KV Store), 2 shards, and a replication factor of 3. The "Clients" column indicates the number of YCSB client processes. "Threads" is the number of threads per process, with the total number of threads in parentheses; hence, 90 threads per YCSB process for a total of 360 threads. The client processes were distributed across 10 client machines.

        Shards | KVS Size (records) | Clients | Threads   | Mixed Throughput (ops/sec) | Read Latency av/95%/99% (ms) | Write Latency av/95%/99% (ms)
        2      | 400m (2x3)         | 4       | 90 (360)  | 302,152                    | 0.76/1/3                     | 3.08/8/35
        4      | 800m (4x3)         | 8       | 90 (720)  | 558,569                    | 0.79/1/4                     | 3.82/16/45
        8      | 1600m (8x3)        | 16      | 90 (1440) | 1,028,868                  | 0.85/2/5                     | 4.29/21/51
        10     | 2000m (10x3)       | 20      | 90 (1800) | 1,244,550                  | 0.88/2/6                     | 4.47/23/53
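    (The client-local collision fix described above boils down to namespacing the hot keys per client process; a minimal sketch of the idea, not YCSB's actual code, with illustrative names:)

        // Sketch: scope each client's Zipfian-distributed ids to its own
        // key range, so the hottest keys collide only within one client
        // process and new clients add load without piling onto one key.
        #include <string>

        std::string makeKey(int clientId, long zipfianId) {
            return "user" + std::to_string(clientId) + "_" + std::to_string(zipfianId);
        }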

    Read the article

  • Strange network problem

    - by Ali
    I have a cPanel server with 5 IPs and a few domains. During the night, access to the main domain through HTTP returns error 324 (no response) many times. After several refreshes the page comes up, but many assets won't load and return 324. Using HTTPS is fine, and during the day HTTP is also fine. But another domain on that very server works fine all day through HTTP. The server's DNS servers are ns1 and ns2 of the first domain. The second domain is on the shared IP and the first domain has a dedicated IP. I can't resolve the problem :( and appreciate any help so much!

    Read the article

  • Unity Desktop not loading and ccsm method not working

    - by slimy-spork
    I installed Ubuntu 13.04 x86_64 on my HP Pavilion DV7-6c60us. It installs fine and works until I update. I run the Software Updater and reboot as it requests afterwards. When my computer comes back on, I get the desktop background but the rest of the Unity desktop doesn't load. I've tried the ccsm method of re-enabling the Unity plugin, but it just disables itself again. I've also tried installing the GNOME desktop, but that does nothing for me either. I really want to switch to Ubuntu but this is causing issues. P.S. I've also tried using 12.10 and 12.04 with no dice.

    Read the article

  • Simple alternating text in Visual Basic 2008

    - by Josh Grate
    I have a simple completed program, but I would like to add one more feature to it and I'm not sure how. I have it set up to send a message automatically every 7 seconds when a text field is selected, repeating the message of course. What I would like it to do is alternate between two separate messages, instead of just repeating the one. I would like the new program to post at an interval of 12 seconds. Can you help me? Here is my code:

        Public Class Form1

            ' On every tick: type the textbox contents and press Enter
            Private Sub Timer1_Tick(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Timer1.Tick
                SendKeys.Send(TextBox1.Text)
                SendKeys.Send("{ENTER}")
            End Sub

            ' Start button: enable the timer, with the interval taken from TextBox2
            Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
                Timer1.Enabled = True
                Timer1.Interval = (TextBox2.Text)
            End Sub

            ' Stop button: disable the timer
            Private Sub Button2_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button2.Click
                Timer1.Enabled = False
            End Sub

            ' Keep the timer off until Start is clicked
            Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
                Timer1.Enabled = False
            End Sub

        End Class

    Read the article

  • SQL Server - Rebuilding Indexes

    - by Renso
    Goal: rebuild indexes in SQL Server. This can be done one index at a time, or with the example script below, which rebuilds all indexes for a specified table or for all tables in a given database.

    Why? The data in indexes gets fragmented over time. That means that as the index grows, newly added rows are physically stored in other sections of the allocated database storage space. Kind of like when you load your Christmas shopping into the trunk of your car and, once it is full, you continue to load some onto the back seat: in the same way, some storage buffer is created for your index, but once that runs out, the data is stored in other storage space, and the data in your index is no longer kept in contiguous physical pages. To access the index, the database manager has to "string together" disparate fragments to create the full index as one contiguous set of pages. Defragmentation fixes that.

    What does the fragmentation affect? Depending, of course, on how large the table is and how fragmented the data is, it can cause SQL Server to perform unnecessary data reads, slowing down SQL Server's performance.

    Which index to rebuild? As a rule, consider that when you reorganize a table's clustered index, all other non-clustered indexes on that same table will automatically be rebuilt. A table can only have one clustered index.

    How to rebuild all the indexes for one table: note that the DBCC DBREINDEX command will not automatically rebuild all of the indexes on a given table in a database.

    How to rebuild all indexes for all tables in a given database:

        USE [myDB]   -- enter your database name here
        DECLARE @tableName varchar(255)

        DECLARE TableCursor CURSOR FOR
            SELECT table_name FROM information_schema.tables
            WHERE table_type = 'base table'

        OPEN TableCursor
        FETCH NEXT FROM TableCursor INTO @tableName
        WHILE @@FETCH_STATUS = 0
        BEGIN
            DBCC DBREINDEX(@tableName, ' ', 90)   -- a fill factor of 90%
            FETCH NEXT FROM TableCursor INTO @tableName
        END
        CLOSE TableCursor
        DEALLOCATE TableCursor

    What does this script do? It reindexes all indexes in all tables of the given database. Each index is filled with a fill factor of 90%. While the DBCC DBREINDEX command runs and rebuilds the indexes, the table becomes temporarily unavailable to your users until the rebuild has completed, so don't do this during production hours: it creates a shared lock on the tables, although it will still allow read-only uncommitted data reads, i.e. SELECT.

    What is the fill factor? It is the percentage of space on each index page used for storing data when the index is created or rebuilt. It replaces the fill factor from when the index was created, becoming the new default for the index and for any other non-clustered indexes rebuilt because a clustered index is rebuilt. When fillfactor is 0, DBCC DBREINDEX uses the fill factor value last specified for the index; this value is stored in the sys.indexes catalog view. If fillfactor is specified, table_name and index_name must be specified; if fillfactor is not specified, the default fill factor, 100, is used. (A fill factor of 90, as in the script above, leaves 10% of each index page free to absorb future inserts before fragmentation sets in again.)

    How do I determine the level of fragmentation? Run the DBCC SHOWCONTIG command. However, this requires you to specify the IDs of both the table and the index being examined. To make it a lot easier, requiring only the table name and/or index name, you can run this script:

        DECLARE
            @ID int,
            @IndexID int,
            @IndexName varchar(128)

        -- Specify the table and index names
        SELECT @IndexName = 'index_name'    -- name of the index
        SET @ID = OBJECT_ID('table_name')   -- name of the table

        SELECT @IndexID = IndID
        FROM sysindexes
        WHERE id = @ID AND name = @IndexName

        -- Show the level of fragmentation
        DBCC SHOWCONTIG (@ID, @IndexID)

    Here is an example:

        DBCC SHOWCONTIG scanning 'Tickets' table...
        Table: 'Tickets' (1829581556); index ID: 1, database ID: 13
        TABLE level scan performed.
        - Pages Scanned................................: 915
        - Extents Scanned..............................: 119
        - Extent Switches..............................: 281
        - Avg. Pages per Extent........................: 7.7
        - Scan Density [Best Count:Actual Count].......: 40.78% [115:282]
        - Logical Scan Fragmentation ..................: 16.28%
        - Extent Scan Fragmentation ...................: 99.16%
        - Avg. Bytes Free per Page.....................: 2457.0
        - Avg. Page Density (full).....................: 69.64%
        DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    What's important here? The Scan Density. Ideally it should be 100%; as time goes by it drops as fragmentation occurs. When the level drops below 75%, you should consider re-indexing. Here are the results for the same table and clustered index after running the rebuild script:

        DBCC SHOWCONTIG scanning 'Tickets' table...
        Table: 'Tickets' (1829581556); index ID: 1, database ID: 13
        TABLE level scan performed.
        - Pages Scanned................................: 692
        - Extents Scanned..............................: 87
        - Extent Switches..............................: 86
        - Avg. Pages per Extent........................: 8.0
        - Scan Density [Best Count:Actual Count].......: 100.00% [87:87]
        - Logical Scan Fragmentation ..................: 0.00%
        - Extent Scan Fragmentation ...................: 22.99%
        - Avg. Bytes Free per Page.....................: 639.8
        - Avg. Page Density (full).....................: 92.10%
        DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    What's different? The Scan Density has increased from 40.78% to 100%: no fragmentation remains on the clustered index. Note that since we rebuilt the clustered index, all other indexes were also rebuilt.

    Read the article

  • ASP.NET 4.0 and the Entity Framework 4 - Part 4 - A 3 Layered Approach to the Entity Framework

    In this article, Vince suggests a pattern to use when developing a three-layered application using the Entity Framework 4. After providing a short introduction, he demonstrates the creation of the database, data access layer, business logic layer, and a web form. He does so with the help of detailed explanations, source code examples and related screenshots. He also examines how to select records to load a drop-down list, including adding, editing and deleting records.

    Read the article

  • Suspend/hibernate doesn't work on an Asus Laptop

    - by b1ackjosh
    I'm having issues suspending 11.04 on my new Asus U30sd-XA1 laptop. It's a new laptop on the market, and I'm wondering if anyone else has had this issue and maybe has a fix for it. Basically, whenever I close the lid or put the laptop to sleep, the screen goes black but the video card doesn't actually turn off; then the fan spins up even faster than before and the machine gets hot. I did see that some people having similar issues on other laptops rolled the kernel back. I'm newish to Ubuntu and I'm not super comfortable messing with the kernel, so if anyone has any other solutions that would be great. I am not using the 520M NVIDIA driver because Unity won't load after it's installed; I've also heard quite a few bad things about the drivers on the Ubuntu forums, so I deactivated the driver.

    Read the article

  • Question about Partitioning

    - by Trent C
    I am looking to dual-boot Windows 7 and Ubuntu 13.10. I have been using Windows for work and school for over a year, and have about 100 GB of stored files (backed up, of course) and some paid programs. Because of this, I really want my partitioning experience to go well. Unfortunately, I am running into a bit of an anomaly. When I load GParted, I see that my sda drive is unallocated (http://i.imgur.com/Hi2XhIr.png), whereas my sdb appears to contain all of the Windows files and partitions, and makes up my C: drive (http://i.imgur.com/aaCOXje.png). Is this going to be an issue, given that all the literature on dual-boot installation references sda? How do I work around it?

    System info: Lenovo IdeaPad Y570, 750 GB HDD with 64 GB SSD. Processor: Intel® Core™ i7-2670QM CPU @ 2.20GHz × 8.

    Read the article

  • VLC does not play any file [any video/.mp3] on the local machine, closes when a file is opened

    - by hsemarap
    When I run vlc from the terminal, I get the following. In the VLC dialog box:

        Your input can't be opened:
        VLC is unable to open the MRL 'file:///media/Ent/movies/the%20mask.avi'. Check the log for details.

    In the terminal:

        VLC media player 2.0.1 Twoflower (revision 2.0.1-0-gf432547)
        [0x8fe8f8] main input error: open of `file/xspf-open:///home/para/.local/share/vlc/ml.xspf' failed
        [0x8fe8f8] main input error: Your input can't be opened
        [0x8fe8f8] main input error: VLC is unable to open the MRL 'file/xspf-open:///home/para/.local/share/vlc/ml.xspf'. Check the log for details.
        [0x8f1aa8] main interface error: no suitable interface module
        [0x8f9b08] main interface error: no suitable interface module
        [0x8be008] main libvlc error: option http-user-agent does not exist
        [0x8be008] main libvlc: Running vlc with the default interface. Use 'cvlc' to use vlc without interface.
        [0x8f9b08] qt4 interface error: Unable to load extensions module
        [0x7f5280000b78] main input error: open of `file:///media/Ent/movies/the%20mask.avi' failed
        [0x8fc718] main playlist error: could not export playlist

    Read the article

  • Boot problem with Ubuntu 11.10

    - by leonard
    So, I am an Ubuntu newbie, and I don't know a thing about it. I tried to install restricted extras (am I using that term right?) through the terminal, but it seems I failed :) maybe because the internet connection crashed during installation... Anyway, I restarted and got the Ubuntu intro screen at boot, and then a black screen appeared with this information:

        checking battery state

    I restarted and it showed something else, so I rebooted 5 times. Then a screen appeared:

        * starting bluetooth
        * pulseaudio configured for per-user sessions
        saned disabled; edit /etc/default/saned

    I tried this: "My fresh installation doesn't load. (PulseAudio problem)", but I didn't know how to save, so I just skipped to update-grub; anyway, nothing helped. What do I do?

    Read the article

  • Do you use third-party companies to review your company's code?

    - by CodeToGlory
    I am looking to get the following:

    - a basic code review, to make sure they follow the guidelines imposed;
    - security code analysis, to make sure there are no loopholes;
    - no performance bottlenecks, verified by doing a load test, etc.

    We have a lot of code coming in from third parties, and it is becoming laborious to manage code reviews, hence I'm looking to see whether others employ such practices. I understand that it may be a concern for some and would raise the question, "Well, who is going to make sure the agency is doing their job right?" But basically I am just looking for a third party who can hold all vendor code to the same standards.

    Read the article

  • Cannot boot Win7 after updating to ubuntu 11.10

    - by angryInsomniac
    I had a dual-boot setup on my system, with Ubuntu 11.04 and Windows 7. Yesterday I updated Ubuntu to 11.10. Now I cannot seem to be able to load Windows 7, though I do see an entry for it in the boot menu. I select that entry and then the screen goes black, with everything coming to a standstill. I tried to wait for it to respond, but nothing happened. I went through the forums and tried "update-grub", but when I do that, nothing really happens. I am not sure of this, but I get no output from GRUB for up to 10 minutes (can update-grub be made verbose?). How can I fix this and get back to booting both OSes properly?

    Read the article
