Search Results

Search found 78653 results on 3147 pages for 'performance object name s'.

Page 259/3147

  • Performance difference between functions and pattern matching in Mathematica

    - by Samsdram
    So Mathematica is different from other dialects of Lisp because it blurs the lines between functions and macros. In Mathematica, a user writing a mathematical function will likely use pattern matching, like f[x_]:= x*x, instead of f=Function[{x},x*x], though both return the same result when called as f[x]. My understanding is that the first approach is roughly equivalent to a Lisp macro, and in my experience it is favored because of the more concise syntax. So I have two questions. First, is there a performance difference between executing functions and the pattern-matching/macro approach? (Part of me wouldn't be surprised if functions were actually transformed into some version of macros to allow features like Listable to be implemented.) Second, the reason I care about this is the recent set of questions (1) (2) about trying to catch Mathematica errors in large programs. If most of the computations were defined in terms of Functions, it seems to me that keeping track of the order of evaluation, and of where an error originated, would be easier than trying to catch the error after the input has been rewritten by successive applications of macros/patterns.
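    A minimal way to measure the difference yourself (a sketch, not from the question; it assumes machine-precision arguments and that the trivial body x*x is representative of your real workload):

        f[x_] := x*x;                (* pattern-matching definition *)
        g = Function[{x}, x*x];      (* pure function *)
        First@AbsoluteTiming[Do[f[2.5], {10^6}]]
        First@AbsoluteTiming[Do[g[2.5], {10^6}]]

    Comparing the two timings on your own kernel version is more reliable than any general claim, since the evaluator's optimizations differ between releases.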

    Read the article

  • Issue pushing object into an array JS

    - by Javacadabra
    I'm having an issue placing an object into my array in JavaScript. This is the code:

        $('.confirmBtn').click(function(){
            // Get reference to the value in the text area
            var comment = $("#comments").val();
            // Create object
            var orderComment = { 'comment' : comment };
            // Add object to the array
            productArray.push(orderComment);
            // Update cookie
            $.cookie('order_cookie', JSON.stringify(productArray), { expires: 1, path: '/' });
        });

    However, when I print the array this is the output:

        Array ( [0] => Array ( [stockCode] => CBL202659/A [quantity] => 8 )
                [1] => Array ( [stockCode] => CBL201764 [quantity] => 6 )
                [2] => TEST TEST )

    I would like it to look like this:

        Array ( [0] => Array ( [stockCode] => CBL202659/A [quantity] => 8 )
                [1] => Array ( [stockCode] => CBL201764 [quantity] => 6 )
                [2] => Array ( [comment] => TEST TEST ) )

    I added products to the array in a similar way and it worked fine:

        var productArray = []; // Will hold order items
        $(".orderBtn").click(function(event){
            // Check to ensure quantity > 0
            if(quantity == 0){
                console.log("Quantity must be greater than 0")
            } else { // It is, so continue
                // Show the order box
                $(".order-alert").show();
                event.preventDefault();
                // Get reference to the product clicked
                var stockCode = $(this).closest('li').find('.stock_code').html();
                // Get reference to the quantity selected
                var quantity = $(this).closest('li').find('.order_amount').val();
                // Order item (contains stockCode and quantity)
                var orderItem = { 'stockCode' : stockCode, 'quantity' : quantity };
                // Check if cookie exists
                if($.cookie('order_cookie') === undefined){
                    console.log("Creating new cookie");
                    // Add object to the array
                    productArray.push(orderItem);

    Read the article

  • How to output multiple rows from an SQL query using the mysqli object

    - by Jonathan
    Assuming that the mysqli object is already instantiated (and connected) in the global variable $mysql, here is the code I am trying to work with:

        class Listing {
            private $mysql;

            function getListingInfo($l_id = "", $category = "", $subcategory = "", $username = "", $status = "active") {
                $condition = "`status` = '$status'";
                if (!empty($l_id)) $condition .= " AND `L_ID` = '$l_id'";
                if (!empty($category)) $condition .= " AND `category` = '$category'";
                if (!empty($subcategory)) $condition .= " AND `subcategory` = '$subcategory'";
                if (!empty($username)) $condition .= " AND `username` = '$username'";
                $result = $this->mysql->query("SELECT * FROM listing WHERE $condition") or die('Error fetching values');
                $this->listing = $result->fetch_array() or die('could not create object');
                foreach ($this->listing as $key => $value) :
                    $info[$key] = stripslashes(html_entity_decode($value));
                endforeach;
                return $info;
            }
        }

    There are several hundred listings in the DB, and when I call $result->fetch_array() it places the first row of the DB in an array. However, when I try to call the object, I can't seem to access more than the first row. For instance:

        $listing_row = new Listing;
        while ($listing = $listing_row->getListingInfo()) {
            echo $listing[0];
        }

    This outputs an infinite loop of the same row in the DB. Why does it not advance to the next row? If I move this code outside the class:

        $this->listing = $result->fetch_array() or die('could not create object');
        foreach ($this->listing as $key => $value) :
            $info[$key] = stripslashes(html_entity_decode($value));
        endforeach;

    it works exactly as expected, outputting a row at a time while looping through the while statement. Is there a way to write this so that I can keep the fetch_array() call in the class and still loop through the records?
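    A sketch of one way out (an assumption about intent, not the poster's code): run the query once, keep the mysqli result object in a property, and let each call advance it, so a while loop sees successive rows and stops when fetch_array() returns NULL:

        class Listing {
            private $mysql;
            private $result;

            function getListingInfo($status = "active") {
                // Run the query only once and keep the result set between calls
                if ($this->result === null) {
                    $this->result = $this->mysql->query(
                        "SELECT * FROM listing WHERE `status` = '$status'")
                        or die('Error fetching values');
                }
                $row = $this->result->fetch_array();
                if ($row === null) return false; // no more rows; ends the while loop
                $info = array();
                foreach ($row as $key => $value) {
                    $info[$key] = stripslashes(html_entity_decode($value));
                }
                return $info;
            }
        }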

    Read the article

  • How to convert object to string list?

    - by user1381501
    I want to get two values using a LINQ Select query and convert the result from a list of objects to a list of strings (List<User> to List<string>). The code is below. I get an error on the line where I cast the object list to a string list: return returnvalue = (List<string>)userlist;

        public List<string> GetUserList(string username)
        {
            List<User> UserList = new List<User>();
            List<string> returnvalue = new List<string>();
            try
            {
                string returnstring = string.Empty;
                DataTable dt = null;
                dt = Library.Helper.FindUser(username, 200);
                foreach (DataRow dr in dt.Rows)
                {
                    User user = new User();
                    user.id = dr["ID"].ToString();
                    user.name = dr["Name"].ToString();
                    UserList.Add(user);
                }
            }
            catch (Exception ex) { }

            List<SharePointMentoinUser> userlist = UserList.Select(a => new User { name = (string)a.name, id = (string)a.id }).ToList();
            return returnvalue = (List<string>)userlist;   // error here
        }
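    A cast can't turn a List<User> into a List<string>; the usual fix is to project each User into the string you want. A sketch (the "id:name" format is an assumption - substitute whatever string you actually need):

        // Build the List<string> directly from the User objects
        List<string> returnvalue = UserList
            .Select(u => u.id + ":" + u.name)
            .ToList();
        return returnvalue;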

    Read the article

  • MySQLi Wrapper -- will this slow down performance?

    - by Kerry
    I found the following code on php.net. I'm trying to write a wrapper for the MySQLi library to make things incredibly simple. If this is going to slow down performance, I'll skip it and find another way; if it works, I'll go with it. I have a single query function; if someone passes in more than one variable, I assume the query has to be prepared. The function I would use to pass an array to mysqli_stmt_bind_param is call_user_func_array, and I have a feeling that is going to slow things down. Am I right?

        <?php
        /* just explaining how to call mysqli_stmt_bind_param with a parameter array */
        $sql_link = mysqli_connect('localhost', 'my_user', 'my_password', 'world');
        $type = "isssi";
        $param = array("5", "File Description", "File Title", "Original Name", time());
        $sql = "INSERT INTO file_detail (file_id, file_description, file_title,
                file_original_name, file_upload_date) VALUES (?, ?, ?, ?, ?)";
        $sql_stmt = mysqli_prepare($sql_link, $sql);
        call_user_func_array('mysqli_stmt_bind_param', array_merge(array($sql_stmt, $type), $param));
        mysqli_stmt_execute($sql_stmt);
        ?>
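    The call_user_func_array dispatch itself adds only a small constant cost per query, which is normally dwarfed by the round-trip to MySQL. A more common gotcha with this exact snippet (a hedged aside, not part of the question): mysqli_stmt_bind_param takes its parameters by reference, so on many PHP versions you must wrap the array values in references first:

        // Wrap each array element in a reference for mysqli_stmt_bind_param
        function ref_values(array $arr) {
            $refs = array();
            foreach ($arr as $key => $value) {
                $refs[$key] = &$arr[$key];
            }
            return $refs;
        }

        call_user_func_array('mysqli_stmt_bind_param',
            array_merge(array($sql_stmt, $type), ref_values($param)));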

    Read the article

  • overload == (and !=, of course): can I bypass == to determine whether the object is null?

    - by LLS
    Hello, when I try to overload operator == and != in C#, and override Equals as recommended, I find I have no way to distinguish a normal object from null. For example, I defined a class Complex:

        public static bool operator ==(Complex lhs, Complex rhs)
        {
            return lhs.Equals(rhs);
        }
        public static bool operator !=(Complex lhs, Complex rhs)
        {
            return !lhs.Equals(rhs);
        }
        public override bool Equals(object obj)
        {
            if (obj is Complex)
            {
                return (((Complex)obj).Real == this.Real &&
                        ((Complex)obj).Imaginary == this.Imaginary);
            }
            else
            {
                return false;
            }
        }

    But when I want to use

        if (temp == null)

    and temp really is null, an exception is thrown. And I can't use == inside the operator to determine whether lhs is null, because that would cause an infinite loop. What should I do in this situation? One way I can think of is to use something like Class.Equal(object, object) (if it exists) to bypass == when I do the check. What is the normal way to solve this problem?
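    A sketch of the common pattern: object.ReferenceEquals does exactly this kind of bypass, since it is an identity check that never invokes the overloaded ==:

        public static bool operator ==(Complex lhs, Complex rhs)
        {
            // ReferenceEquals never calls the overloaded ==, so no recursion
            if (object.ReferenceEquals(lhs, rhs)) return true;    // same object, or both null
            if (object.ReferenceEquals(lhs, null)) return false;  // only lhs is null
            return lhs.Equals(rhs);
        }

        public static bool operator !=(Complex lhs, Complex rhs)
        {
            return !(lhs == rhs);
        }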

    Read the article

  • Call to a member function find() on a non-object simpleHTMLDOM PHP

    - by wandersolo
    I am trying to read a link from one page, print the URL, go to that page, and read the link on the next page in the same location, print the URL, go to that page (and so on...). All I'm doing is reading the URL and passing it as an argument to the get_links() method until there are no more links. This is my code, but it throws a Fatal error: Call to a member function find() on a non-object. Anyone know what's wrong?

        <?php
        $mainPage = 'https://www.bu.edu/link/bin/uiscgi_studentlink.pl/1346752597?ModuleName=univschr.pl&SearchOptionDesc=Class+Subject&SearchOptionCd=C&KeySem=20133&ViewSem=Fall+2012&Subject=&MtgDay=&MtgTime=';
        get_links($mainPage);

        function get_links($url)
        {
            $data = new simple_html_dom();
            $data = file_get_html($url);
            $nodes = $data->find("input[type=hidden]");
            $fURL = $data->find("/html/body/form");
            $firstPart = $fURL[0]->action . '<br>';
            foreach ($nodes as $node) {
                $val = $node->value;
                $name = $node->name;
                $name . '<br />';
                $val . "<br />";
                $str1 = $str1 . "&" . $name . "=" . $val;
            }
            $fixStr1 = str_replace('&College', '?College', $str1);
            $fixStr2 = str_replace('Fall 2012', 'Fall+2012', $fixStr1);
            $fixStr3 = str_replace('Class Subject', 'Class+Subject', $fixStr2);
            $fixStr4 = $firstPart . $fixStr3;
            echo $nextPageURL = chop($fixStr4);
            get_links($nextPageURL);
        }
        ?>
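    One hedged first check (an assumption about the cause, not a confirmed diagnosis): simple_html_dom's file_get_html() returns false when the page can't be fetched, and calling find() on that boolean produces exactly this fatal error. Guarding the return value narrows it down:

        $data = file_get_html($url);
        if ($data === false) {
            die("Could not fetch: $url");  // no DOM to search, so stop here
        }
        $nodes = $data->find("input[type=hidden]");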

    Read the article

  • Webalizer causing high CPU load

    - by Tom
    We use Webalizer to generate reports on our Apache access logs - it is useful in conjunction with Google Analytics. The problem is that Webalizer uses a lot of CPU when running. If I run top I can see two perl processes at 90% CPU - this slows down the machine, and therefore the website, for our users. Webalizer is run via a daily cron job (/etc/cron.daily/00webalizer):

        #! /bin/bash
        # update access statistics for the web site
        if [ -s /var/log/httpd/access_log ]; then
            exec /usr/bin/webalizer -Q
        fi

    Does anyone know how to limit how much CPU Webalizer can use? For example, would nice help, and how would I use it?
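    A hedged sketch of the nice/ionice approach (ionice -c3 assumes a kernel with the CFQ idle scheduling class; paths unchanged from the script above):

        #! /bin/bash
        # update access statistics at the lowest CPU and I/O priority
        if [ -s /var/log/httpd/access_log ]; then
            exec nice -n 19 ionice -c3 /usr/bin/webalizer -Q
        fi

    nice -n 19 gives Webalizer the lowest CPU priority, so the scheduler favors Apache whenever both want the processor; ionice -c3 (the idle class) does the same for disk access.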

    Read the article

  • AppCmd returns error: Object 'SET' is not supported

    - by RHPT
    I am trying to set SSL host headers and secure site bindings in IIS 7. I followed the directions on this website http://www.digicert.com/ssl-support/ssl-host-headers-iis-7.htm (among others), but when I run the appcmd command mentioned, I get the error "Object 'SET' is not supported. Run 'appcmd.exe /?' to display supported objects". I have also tried "appcmd site set" but it still returns the same error. What am I doing wrong? The server I am working on is Windows 2008 R2 x64, if that matters. Thank you.
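    For what it's worth, appcmd expects verb-then-object order (appcmd <verb> <object> ...), which is also why "appcmd site set" fails the same way - the parser reads the word after the verb as the object. A hedged sketch of the binding command in that order (the site name and host header here are made-up values):

        appcmd set site /site.name:"My Site" /+bindings.[protocol='https',bindingInformation='*:443:secure.example.com']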

    Read the article

  • extreme slowness with a remote database in Drupal

    - by ceejayoz
    We're attempting to scale our Drupal installations up and have decided on some dedicated MySQL boxes. Unfortunately, we're running into extreme slowness when we attempt to use the remote DB - page load times go from ~200 milliseconds to 5-10 seconds. Latency between the servers is minimal - a tenth or two of a millisecond:

        PING 10.37.66.175 (10.37.66.175) 56(84) bytes of data.
        64 bytes from 10.37.66.175: icmp_seq=1 ttl=64 time=0.145 ms
        64 bytes from 10.37.66.175: icmp_seq=2 ttl=64 time=0.157 ms
        64 bytes from 10.37.66.175: icmp_seq=3 ttl=64 time=0.157 ms
        64 bytes from 10.37.66.175: icmp_seq=4 ttl=64 time=0.144 ms
        64 bytes from 10.37.66.175: icmp_seq=5 ttl=64 time=0.121 ms
        64 bytes from 10.37.66.175: icmp_seq=6 ttl=64 time=0.122 ms
        64 bytes from 10.37.66.175: icmp_seq=7 ttl=64 time=0.163 ms
        64 bytes from 10.37.66.175: icmp_seq=8 ttl=64 time=0.115 ms
        64 bytes from 10.37.66.175: icmp_seq=9 ttl=64 time=0.484 ms
        64 bytes from 10.37.66.175: icmp_seq=10 ttl=64 time=0.156 ms

        --- 10.37.66.175 ping statistics ---
        10 packets transmitted, 10 received, 0% packet loss, time 8998ms
        rtt min/avg/max/mdev = 0.115/0.176/0.484/0.104 ms

    Drupal's devel.module timers show the database queries aren't running any slower on the remote DB - about 150 microseconds whether it's the local or the remote server. Profiling with XHProf shows PHP execution times that aren't out of whack, either. The number of queries doesn't seem to make a difference - we see the same 5-10 second delay whether a page has 12 queries or 250. Any suggestions about where I should start troubleshooting here? I'm quite confused.
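    One hedged place to start (an assumption about the cause, not a diagnosis): by default MySQL performs a reverse-DNS lookup on every new connection, and a slow or missing resolver adds whole seconds per page while ping latency stays tiny - which matches this symptom pattern. Disabling it in my.cnf rules it out quickly:

        [mysqld]
        # skip reverse-DNS lookups on connect; note that GRANTs must then
        # use IP addresses rather than hostnames
        skip-name-resolve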

    Read the article

  • PostgreSQL vs Cassandra vs MongoDB vs Voldemort?

    - by ramonrails
    Which database should we decide on? Any comparisons?

    Existing: PostgreSQL. Issues:
    - Not easily scalable horizontally; needs sharding, etc.
    - Clustering does not solve the data growth problem

    Looking for: any database that is easily horizontally scalable
    - Cassandra (Twitter uses that?)
    - MongoDB (rapidly gaining popularity)
    - Voldemort
    - Other?

    Why?
    - Data growing with a snowball effect
    - The existing PostgreSQL locks tables etc. for vacuum tasks periodically
    - Archiving data is currently tedious
    - Human interaction is involved in the existing archive/vacuum process periodically
    - Need a 'set it. forget it. just add another server when data grows more.' type of solution

    Read the article

  • Faceted search with Solr on Windows

    - by Dr.NETjes
    With over 10 million hits a day, funda.nl is probably the largest ASP.NET website that uses Solr on a Windows platform. While all our data (i.e. real estate properties) is stored in SQL Server, we're using Solr 1.4.1 to return the faceted search results as fast as we can. And yes, Solr is very fast. We did some heavy stress testing on our Solr service, which allowed us to do over 1,000 req/sec on a single 64-bit Solr instance - and that's including converting search URLs to Solr HTTP queries and deserializing Solr's result XML back to .NET objects! Let me tell you about faceted search and how to integrate Solr in a .NET/Windows environment. I'll bet it's easier than you think :-)

    What is faceted search?

    Faceted search is the clustering of search results into categories, allowing users to drill into search results. By showing the number of hits for each facet category, users can easily see how many results match that category. If you're still a bit confused, this example from CNET explains it all.

    The SQL solution for faceted search

    Our ("pre-Solr") solution for faceted search was done by adding a lot of redundant columns to our SQL tables and doing a COUNT(...) for each of those columns. So if a user was searching for real estate properties in the city 'Amsterdam', our facet query would be something like:

        SELECT COUNT(hasGarden), COUNT(has2Bathrooms), COUNT(has3Bathrooms), COUNT(etc...)
        FROM Houses
        WHERE city = 'Amsterdam'

    While this solution worked fine for a couple of years, it wasn't very easy for developers to add new facets. And also, performing COUNTs on all matched rows only performs well if you have a limited number of rows in a table (i.e. less than a million).

    Enter Solr

    "Solr is an open source enterprise search server based on the Lucene Java search library, with XML/HTTP and JSON APIs, hit highlighting, faceted search, caching, replication, and a web administration interface." (quoted from Wikipedia's page on Solr)

    Solr isn't a database; it's more like a big index. Every time you upload data to Solr, it analyzes the data and creates an inverted index from it (like the index pages of a book). This way Solr can look up data very quickly. Explaining the inner workings of Solr is beyond the scope of this post, but if you want to learn more, please visit the Solr Wiki pages.

    Getting faceted search results from Solr is very easy; first let me show you how to send an HTTP query to Solr:

        http://localhost:8983/solr/select?q=city:Amsterdam

    This will return an XML document containing the search results (in this example, only three houses in the city of Amsterdam):

        <response>
          <result name="response" numFound="3" start="0">
            <doc>
              <long name="id">3203</long>
              <str name="city">Amsterdam</str>
              <str name="street">Keizersgracht</str>
              <int name="numberOfBathrooms">2</int>
            </doc>
            <doc>
              <long name="id">3205</long>
              <str name="city">Amsterdam</str>
              <str name="street">Vondelstraat</str>
              <int name="numberOfBathrooms">3</int>
            </doc>
            <doc>
              <long name="id">4293</long>
              <str name="city">Amsterdam</str>
              <str name="street">Wibautstraat</str>
              <int name="numberOfBathrooms">2</int>
            </doc>
          </result>
        </response>

    By adding a facet query part for the field "numberOfBathrooms", Solr will return the facets for this particular field. We will see that there's one house in Amsterdam with three bathrooms and two houses with two bathrooms:

        http://localhost:8983/solr/select?q=city:Amsterdam&facet=true&facet.field=numberOfBathrooms

    The complete XML response from Solr now looks like:

        <response>
          <result name="response" numFound="3" start="0">
            <doc>
              <long name="id">3203</long>
              <str name="city">Amsterdam</str>
              <str name="street">Keizersgracht</str>
              <int name="numberOfBathrooms">2</int>
            </doc>
            <doc>
              <long name="id">3205</long>
              <str name="city">Amsterdam</str>
              <str name="street">Vondelstraat</str>
              <int name="numberOfBathrooms">3</int>
            </doc>
            <doc>
              <long name="id">4293</long>
              <str name="city">Amsterdam</str>
              <str name="street">Wibautstraat</str>
              <int name="numberOfBathrooms">2</int>
            </doc>
          </result>
          <lst name="facet_fields">
            <lst name="numberOfBathrooms">
              <int name="2">2</int>
              <int name="3">1</int>
            </lst>
          </lst>
        </response>

    Trying Solr for yourself

    To run Solr on your local machine and experiment with it, you should read the Solr tutorial. The tutorial really takes only about an hour, in which you install Solr, upload sample data and get some query results. And yes, it works on Windows without a problem. Note that in the Solr tutorial you're using Jetty as a Java servlet container (that's why you must start it using "java -jar start.jar"). In our environment we prefer to use Apache Tomcat to host Solr, which installs like a Windows service and works more like .NET developers expect. See the SolrTomcat page.

    Some best practices for running Solr on Windows:
    - Use the 64-bit version of Tomcat. In our tests, this doubled the req/sec we were able to handle!
    - Use a .NET XmlReader to convert Solr's XML output stream to .NET objects. Don't use XPath; it won't scale well.
    - Use filter queries (the "fq" parameter) instead of the normal "q" parameter where possible. Filter queries are cached by Solr and will speed up Solr's response time (see FilterQueryGuidance).

    In my next post I'll talk about how to keep Solr's indexed data in sync with the data in your SQL tables. Timestamps / rowversions will help you out here!
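    As a rough illustration of the XmlReader advice above (a minimal sketch against the response format shown; the House class and the streaming loop are assumptions, not funda.nl's actual code):

        using System.Collections.Generic;
        using System.Xml;

        public class House { public long Id; public string City; public string Street; }

        public static class SolrClient
        {
            // Streams Solr's result XML and maps each <doc> element to a House,
            // without loading the whole document into memory or using XPath.
            public static List<House> Query(string solrUrl)
            {
                var houses = new List<House>();
                using (XmlReader r = XmlReader.Create(solrUrl)) // XmlReader can open an http URL directly
                {
                    House h = null;
                    string field = null;
                    while (r.Read())
                    {
                        if (r.NodeType == XmlNodeType.Element)
                        {
                            if (r.Name == "doc") { h = new House(); houses.Add(h); }
                            else field = r.GetAttribute("name");
                        }
                        else if (r.NodeType == XmlNodeType.Text && h != null && field != null)
                        {
                            if (field == "id") h.Id = long.Parse(r.Value);
                            else if (field == "city") h.City = r.Value;
                            else if (field == "street") h.Street = r.Value;
                            field = null;
                        }
                    }
                }
                return houses;
            }
        }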

    Read the article

  • High apache load but little traffic logged

    - by nrambeck
    I recently installed Varnish to sit in front of Apache on a dedicated server running a single site. It appears to be working well, but the load on Apache is still very high. What doesn't make sense is that the Apache access log shows almost no traffic getting past Varnish. When I tail the Apache log I see about 1-3 hits per second come through. Here is what the load on Apache looks like:

        USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
        apache 13834 8.1 1.0 107716 34164 ? S 08:24 0:02 /usr/sbin/httpd
        apache 13835 8.1 1.0 107716 33856 ? S 08:24 0:02 /usr/sbin/httpd
        apache 11483 7.9 0.9 105916 30788 ? S 08:23 0:06 /usr/sbin/httpd
        apache 12255 7.5 1.0 107476 33312 ? S 08:24 0:04 /usr/sbin/httpd
        apache 9340 7.2 1.1 107732 34916 ? R 08:23 0:09 /usr/sbin/httpd
        apache 12029 6.8 0.9 106908 30416 ? S 08:24 0:04 /usr/sbin/httpd
        apache 11577 6.7 1.0 107192 34180 ? S 08:24 0:05 /usr/sbin/httpd
        apache 11486 6.6 1.0 106176 33112 ? S 08:23 0:05 /usr/sbin/httpd
        apache 11796 6.4 1.0 106936 31916 ? S 08:24 0:04 /usr/sbin/httpd
        apache 13815 6.3 1.0 107988 34464 ? S 08:24 0:02 /usr/sbin/httpd
        apache 18089 6.3 1.3 107444 43212 ? S 08:11 0:52 /usr/sbin/httpd
        apache 11797 5.9 1.0 107716 34580 ? S 08:24 0:04 /usr/sbin/httpd
        apache 7655 5.9 0.0 0 0 ? Z 08:22 0:09 [httpd] <defunct>
        mysql 8033 5.9 6.2 318240 199512 ? Sl May14 352:34 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/va
        apache 11488 5.8 0.9 106924 31632 ? S 08:23 0:04 /usr/sbin/httpd
        apache 9375 5.7 1.1 106956 35552 ? S 08:23 0:07 /usr/sbin/httpd
        apache 3551 5.6 1.1 106956 36140 ? S 08:20 0:14 /usr/sbin/httpd
        apache 7657 5.6 1.0 106968 32472 ? S 08:22 0:09 /usr/sbin/httpd
        apache 11433 5.6 1.0 107716 34396 ? S 08:23 0:04 /usr/sbin/httpd
        apache 5505 5.5 1.1 106944 34924 ? S 08:21 0:12 /usr/sbin/httpd
        apache 7172 5.3 1.1 106972 35368 ? S 08:22 0:09 /usr/sbin/httpd
        apache 10088 5.2 0.9 106160 31240 ? S 08:23 0:04 /usr/sbin/httpd
        apache 7656 5.1 1.0 106436 34388 ? S 08:22 0:08 /usr/sbin/httpd
        apache 3468 5.0 1.1 107716 35968 ? S 08:20 0:13 /usr/sbin/httpd
        apache 14242 4.8 1.0 107728 33032 ? S 08:25 0:00 /usr/sbin/httpd
        apache 3578 4.8 1.1 107988 35964 ? S 08:20 0:12 /usr/sbin/httpd
        apache 28192 4.8 1.2 106944 38060 ? S 08:17 0:23 /usr/sbin/httpd
        apache 3277 4.6 1.1 106956 35688 ? S 08:20 0:13 /usr/sbin/httpd
        apache 15434 3.7 0.7 106908 24684 ? S 08:25 0:00 /usr/sbin/httpd

    There is a default Apache log and then one other VirtualHost log set up. I'm concerned that Apache is handling some kind of traffic that is not being logged. Is that possible? And is there anything I can do to capture that traffic?
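    One hedged way to see exactly what those busy processes are serving (a sketch assuming Apache 2.2-style access control and that mod_status is available; not part of the original question):

        # httpd.conf: expose a live worker table at /server-status
        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>

    Then fetching http://localhost/server-status while the load is high shows each worker's current request, including in-flight requests that haven't reached the access log yet.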

    Read the article

  • Best available technology for layered disk cache in linux

    - by SpliFF
    I've just bought a 6-core Phenom with 16G of RAM. I use it primarily for compiling and video encoding (and occasional web/db). I'm finding all activities get disk-bound and I just can't keep all 6 cores fed. I'm buying an SSD RAID to sit between the HDD and tmpfs. I want to set up a "layered" filesystem where reads are cached on tmpfs but writes safely go through to the SSD. I want files (or blocks) that haven't been read lately on the SSD to then be written back to a HDD using a compressed FS or block layer.

    So basically reads:
    - Check tmpfs
    - Check SSD
    - Check HD

    And writes:
    - Straight to SSD (for safety), then tmpfs (for speed)

    And periodically, or when space gets low:
    - Move least frequently accessed files down one layer.

    I've seen a few projects of interest: CacheFS, cachefsd, and bcache seem pretty close, but I'm having trouble determining which are practical. bcache seems a little risky (early adoption); cachefs seems tied to specific network filesystems. There are "union" projects, unionfs and aufs, that let you mount filesystems over each other (a USB device over a DVD, usually), but both are distributed as a patch, and I get the impression this sort of "transparent" mounting was going to become a kernel feature rather than a FS.

    I know the kernel has a built-in disk cache, but it doesn't seem to work well with compiling. I see a 20x speed improvement when I move my source files to tmpfs. I think it's because the standard buffers are dedicated to a specific process, and compiling creates and destroys thousands of processes during a build (just guessing there). It looks like I really want those files precached.

    I've read tmpfs can use virtual memory. In that case, is it practical to create a giant tmpfs with swap on the SSD (see the sketch below)? I don't need to boot off the resulting layered filesystem. I can load grub, kernel and initrd from elsewhere if needed.

    So that's the background. The question has several components, I guess:
    - Recommended FS and/or block layer for the SSD and compressed HDD.
    - Recommended mkfs parameters (block size, options, etc...)
    - Recommended cache/mount technology to bind the layers transparently
    - Required mount parameters
    - Required kernel options / patches, etc.
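    On the tmpfs-with-swap idea specifically, a minimal sketch (device names and sizes are assumptions): tmpfs pages that don't fit in RAM are pushed out to swap, so a high-priority swap area on the SSD approximates a RAM-then-SSD cache for the build tree:

        # swap on the SSD, preferred over any slower swap areas (higher priority wins)
        mkswap /dev/sdb1
        swapon -p 32767 /dev/sdb1

        # a large tmpfs for the source tree; pages exceeding RAM spill to the SSD swap
        mount -t tmpfs -o size=48G tmpfs /mnt/build

    This gets the read path (tmpfs, then SSD) almost for free, but note it does not satisfy the write-safety requirement: tmpfs contents vanish on a crash or reboot, so writes would still need to be synced to durable storage separately.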

    Read the article

  • When tab groups are loaded, Firefox becomes unresponsive for minutes (Unresponsive script)

    - by unor
    I have several tab groups (~20) in Firefox. I can start the browser without any problems. However, as soon as I:
    - click the "Group tabs" icon in the toolbar, or
    - right-click on a tab and hover over "Move to tab group",

    Firefox becomes unresponsive/freezes for a rather long time (more than 2 minutes). It seems to load all tab groups (it doesn't load all the pages! I deactivated this in the settings). While this is happening, I get several "Unresponsive script" warnings, like:

    - Script: chrome://global/content/bindings/tabbox.xml:0 (most of the time)
    - Script: chrome://global/content/bindings/tabbox.xml:418
    - Script: chrome://browser/content/tabview.js:400
    - Script: chrome://browser/content/tabview.js:522
    - Script: resource://modules/sessionstore/SessionStore.jsm:3578
    - Script: resource:///components/PageThumbsProtocol.js:79 (rare)
    - Script: resource://gre/modules/XPCOMUtils.jsm:323 (rare)

    (probably also other warnings; I didn't record them yet, though)

    On all of these I click "Continue". After ~2-3 minutes and 3-5 warnings, I can use Firefox again. Now I can switch tab groups without any problems. Why is this happening? How can I prevent the long loading time? Is there maybe an about:config setting I could try? I started Firefox in Safe Mode (= without any add-ons): the problem still exists.

    Read the article

  • SQL 2008 Database Tuning Advisor won't start

    - by Andrew Hancox
    For some reason I can't get DTA to connect to my development machine. It connects to a remote DB just fine, but when I point it to my dev machine I get an error saying: "Failed to initialize MSDB database for tuning (exit code: -1073741819)." I'm pretty sure it's not a permissions issue, since I've used Profiler to capture what it's doing, and all of the commands it has run so far look fine and are being run under my account, which is associated with the sysadmin role; when I run them in SQL Server Management Studio they go through fine. I'm pretty convinced that the problem is related to creating the objects in MSDB that are used by DTA, but I tried creating these manually (I found scripts on the web) and it just seems to push the problem along the line slightly. I'm going out of my mind - I've even tried reinstalling SQL Server, but that hasn't fixed it. I'm using SQL Server 2008 with SP1 (10.0.2531) on Windows Server 2008 (patched up to date). SAVE ME!!!!!

    Read the article

  • PostgreSQL lots of writes

    - by strife911
    Hi, I am using PostgreSQL for a scientific application (unsupervised clustering). The Python program is multi-threaded, so that each thread manages its own postmaster process (one per core). Hence, there is a lot of concurrency. Each thread-process loops infinitely through two SQL queries. The first is for reading, the second is for writing. The read operation considers 500 times the number of rows the write operation considers. Here is the output of dstat:

        ----total-cpu-usage---- ------memory-usage----- -dsk/total- --paging-- --io/total-
        usr sys idl wai hiq siq| used  buff  cach  free| read  writ|  in   out | read  writ
          4   0  32  64   0   0|3599M   63M   57G 1893M|1524k   16M|   0     0 |  98  2046
          1   0  35  64   0   0|3599M   63M   57G 1892M|1204k   17M|   0     0 |  68  2062
          2   0  32  66   0   0|3599M   63M   57G 1890M|1132k   17M|   0     0 |  62  2033
          2   1  32  65   0   0|3599M   63M   57G 1904M|1236k   18M|   0     0 |  80  1994
          2   0  31  67   0   0|3599M   63M   57G 1903M|1312k   16M|   0     0 |  70  1900
          2   0  37  60   0   0|3599M   63M   57G 1899M|1116k   15M|   0     0 |  71  1594
          2   1  37  60   0   0|3599M   63M   57G 1898M| 448k   17M|   0     0 |  39  2001
          2   0  25  72   0   0|3599M   63M   57G 1896M|1192k   17M|   0     0 |  78  1946
          1   0  40  58   0   0|3599M   63M   57G 1895M| 432k   15M|   0     0 |  38  1937

    I am pretty sure I could write more often than that, for I have seen it write up to 110-140M on dstat. How can I optimize this process?
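    A hedged starting point on the PostgreSQL side (the values are assumptions to benchmark against your own workload, and synchronous_commit = off trades a small window of lost transactions on crash for much higher commit throughput):

        # postgresql.conf: settings that commonly help write-heavy workloads
        synchronous_commit = off           # batch WAL flushes instead of an fsync per commit
        wal_buffers = 16MB
        checkpoint_segments = 32           # spread checkpoints out (pre-9.5 parameter)
        checkpoint_completion_target = 0.9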

    Read the article

  • virtual directory makes file copy operations extremely slow on a UNC path (IIS 7.5 bug?)

    - by user144737
    When I create a website/virtual directory pointing to a UNC path, it makes file copies on that UNC path extremely slow:
    - ~6 seconds for a file copy (~13 MB) on the UNC path without any virtual directory/website pointing to it.
    - over 1 minute for the same files (~13 MB) on the same UNC path with a virtual directory/website pointing to it.

    All file copy operations run on the web server side. Our setup is as follows: Web server - Windows Server 2008 R2 Standard / IIS 7.5. File server - Windows Server 2003 Standard. I have tested this case on 3 servers (Windows Server 2008 R2 Standard / IIS 7.5) and got the same result. I also tested this case on 2 Windows 2003 / IIS 6 servers, where it doesn't slow down the file copy. Is this an IIS 7.5 bug? Is there any patch/hotfix to solve this case? Thank you. Gordon

    Read the article

  • System Monitoring service - Hosted

    - by sevitzdotcom
    I'm looking for a system monitoring service, a bit like New Relic, but for the system itself rather than the Ruby side of things - i.e., something like Zabbix, but hosted like New Relic. I want something where I can just drop an 'agent' on the servers and then do all the config, monitoring, and notifications on a slick third-party system. So, essentially, Zabbix meets New Relic meets Pingdom. Any ideas?

    Read the article

  • Automating Solaris 11 Zones Installation Using The Automated Install Server

    - by Orgad Kimchi
    Introduction

    How to use the Oracle Solaris 11 Automated Install server to automate Solaris 11 Zones installation. In this document I will demonstrate how to set up the Automated Install server to provide a hands-off installation process for the Global Zone and two Non-Global Zones located on the same system.

    Architecture layout:

    Figure 1. Architecture layout

    Prerequisite

    Set up the Automated Install server (AI) using the instructions in “How to Set Up Automated Installation Services for Oracle Solaris 11”. The first step in this setup will be creating two Solaris 11 Zones configuration files.

    Step 1: Create the Solaris 11 Zones configuration files

    The Solaris Zones configuration files should be in the format of the zonecfg export command:

        # zonecfg -z zone1 export > /var/tmp/zone1
        # cat /var/tmp/zone1
        create -b
        set brand=solaris
        set zonepath=/rpool/zones/zone1
        set autoboot=true
        set ip-type=exclusive
        add anet
        set linkname=net0
        set lower-link=auto
        set configure-allowed-address=true
        set link-protection=mac-nospoof
        set mac-address=random
        end

    Create a backup copy of this file under a different name, for example, zone2:

        # cp /var/tmp/zone1 /var/tmp/zone2

    Modify the second configuration file with the zone2 configuration information. You should change the zonepath, for example:

        set zonepath=/rpool/zones/zone2

    Step 2: Copy and share the Zones configuration files

    Create the NFS directory for the Zones configuration files:

        # mkdir /export/zone_config

    Share the directory for the Zones configuration files:

        # share -o ro /export/zone_config

    Copy the Zones configuration files into the NFS shared directory:

        # cp /var/tmp/zone1 /var/tmp/zone2 /export/zone_config

    Verify that the NFS share has been created using the following command:

        # share
        export_zone_config      /export/zone_config     nfs     sec=sys,ro

    Step 3: Add the Global Zone as a client to the install service

    Use the installadm create-client command to associate the client (Global Zone) with the install service. To find the MAC address of a system, use the dladm command as described in the dladm(1M) man page. The following command adds the client (Global Zone) with MAC address 0:14:4f:2:a:19 to the s11x86service install service:

        # installadm create-client -e "0:14:4f:2:a:19" -n s11x86service

    You can verify the client creation using the following command:

        # installadm list -c
        Service Name  Client Address     Arch   Image Path
        ------------  --------------     ----   ----------
        s11x86service 00:14:4F:02:0A:19  i386   /export/auto_install/s11x86service

    We can see the client's install service name (s11x86service), MAC address (00:14:4F:02:0A:19) and architecture (i386).

    Step 4: Global Zone manifest setup

    First, get a list of the installation services and the manifests associated with them:

        # installadm list -m
        Service Name   Manifest        Status
        ------------   --------        ------
        default-i386   orig_default   Default
        s11x86service  orig_default   Default

    Then probe the s11x86service and the default manifest associated with it. The -m switch reflects the name of the manifest associated with a service. Since we want to capture that output into a file, we redirect the output of the command as follows:

        # installadm export -n s11x86service -m orig_default > /var/tmp/orig_default.xml

    Create a backup copy of this file under a different name, for example, orig_default2.xml, and edit the copy:

        # cp /var/tmp/orig_default.xml /var/tmp/orig_default2.xml

    Use the configuration element in the AI manifest for the client system to specify non-global zones. Use the name attribute of the configuration element to specify the name of the zone. Use the source attribute to specify the location of the config file for the zone. The source location can be any http:// or file:// location that the client can access during installation. The following sample AI manifest specifies two Non-Global Zones, zone1 and zone2. You should replace server_ip with the IP address of the NFS server.

        <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1">
        <auto_install>
          <ai_instance>
            <target>
              <logical>
                <zpool name="rpool" is_root="true">
                  <filesystem name="export" mountpoint="/export"/>
                  <filesystem name="export/home"/>
                  <be name="solaris"/>
                </zpool>
              </logical>
            </target>
            <software type="IPS">
              <source>
                <publisher name="solaris">
                  <origin name="http://pkg.oracle.com/solaris/release"/>
                </publisher>
              </source>
              <software_data action="install">
                <name>pkg:/entire@latest</name>
                <name>pkg:/group/system/solaris-large-server</name>
              </software_data>
            </software>
            <configuration type="zone" name="zone1" source="file:///net/server_ip/export/zone_config/zone1"/>
            <configuration type="zone" name="zone2" source="file:///net/server_ip/export/zone_config/zone2"/>
          </ai_instance>
        </auto_install>

    The following example adds the /var/tmp/orig_default2.xml AI manifest to the s11x86service install service:

        # installadm create-manifest -n s11x86service -f /var/tmp/orig_default2.xml -m gzmanifest

    You can verify the manifest creation using the following command:

        # installadm list -n s11x86service -m
        Service/Manifest Name  Status   Criteria
        ---------------------  ------   --------
        s11x86service
           orig_default        Default  None
           gzmanifest          Inactive None

    We can see from the command output that the new manifest named gzmanifest has been created and associated with the s11x86service install service.

    Step 5: Non-Global Zone manifest setup

    The AI manifest for non-global zone installation is similar to the AI manifest for installing the global zone. If you do not provide a custom AI manifest for a non-global zone, the default AI manifest for Zones is used. The default AI manifest for Zones is available at /usr/share/auto_install/manifest/zone_default.xml, and it is the one we use in this example:

        # cat /usr/share/auto_install/manifest/zone_default.xml
        <?xml version="1.0" encoding="UTF-8"?>
        <!--
         Copyright (c) 2011, 2012, Oracle and/or its affiliates. All rights reserved.
        -->
        <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1">
        <auto_install>
            <ai_instance name="zone_default">
                <target>
                    <logical>
                        <zpool name="rpool">
                            <!--
                              Subsequent <filesystem> entries instruct an installer
                              to create following ZFS datasets:
                                  <root_pool>/export         (mounted on /export)
                                  <root_pool>/export/home    (mounted on /export/home)
                              Those datasets are part of standard environment
                              and should be always created.
                              In rare cases, if there is a need to deploy a zone
                              without these datasets, either comment out or remove
                              <filesystem> entries. In such scenario, it has to be also
                              assured that in case of non-interactive post-install
                              configuration, creation of initial user account is
                              disabled in related system configuration profile.
                              Otherwise the installed zone would fail to boot.
                            -->
                            <filesystem name="export" mountpoint="/export"/>
                            <filesystem name="export/home"/>
                            <be name="solaris">
                                <options>
                                    <option name="compression" value="on"/>
                                </options>
                            </be>
                        </zpool>
                    </logical>
                </target>
                <software type="IPS">
                    <destination>
                        <image>
                            <!-- Specify locales to install -->
                            <facet set="false">facet.locale.*</facet>
                            <facet set="true">facet.locale.de</facet>
                            <facet set="true">facet.locale.de_DE</facet>
                            <facet set="true">facet.locale.en</facet>
                            <facet set="true">facet.locale.en_US</facet>
                            <facet set="true">facet.locale.es</facet>
                            <facet set="true">facet.locale.es_ES</facet>
                            <facet set="true">facet.locale.fr</facet>
                            <facet set="true">facet.locale.fr_FR</facet>
                            <facet set="true">facet.locale.it</facet>
                            <facet set="true">facet.locale.it_IT</facet>
                            <facet set="true">facet.locale.ja</facet>
                            <facet set="true">facet.locale.ja_*</facet>
                            <facet set="true">facet.locale.ko</facet>
                            <facet set="true">facet.locale.ko_*</facet>
                            <facet set="true">facet.locale.pt</facet>
                            <facet set="true">facet.locale.pt_BR</facet>
                            <facet set="true">facet.locale.zh</facet>
                            <facet set="true">facet.locale.zh_CN</facet>
                            <facet set="true">facet.locale.zh_TW</facet>
                        </image>
                    </destination>
                    <software_data action="install">
                        <name>pkg:/group/system/solaris-small-server</name>
                    </software_data>
                </software>
            </ai_instance>
        </auto_install>

    (Optional) We can customize the default AI manifest for Zones. Create a backup copy of this file under a different name, for example, zone_default2.xml, and edit the copy:

        # cp /usr/share/auto_install/manifest/zone_default.xml /var/tmp/zone_default2.xml

    Edit the copy (/var/tmp/zone_default2.xml). The following example adds the /var/tmp/zone_default2.xml AI manifest to the s11x86service install service and specifies that zone1 and zone2 should use this manifest:

        # installadm create-manifest -n s11x86service -f /var/tmp/zone_default2.xml -m zones_manifest -c zonename="zone1 zone2"

    Note: Do not use the following elements or attributes in a non-global zone AI manifest:
    - The auto_reboot attribute of the ai_instance element
    - The http_proxy attribute of the ai_instance element
    - The disk child element of the target element
    - The noswap attribute of the logical element
    - The nodump attribute of the logical element
    - The configuration element

    Step 6: Global Zone profile setup

    We are going to create a Global Zone configuration profile which includes the host information, for example: host name, IP address, name services, etc.

        # sysconfig create-profile -o /var/tmp/gz_profile.xml

    You need to provide the host information, for example:
    - Default router
    - Root password
    - DNS information

    The output should eventually disappear and be replaced by the initial screen of the System Configuration Tool (see Figure 2), where you can do the final configuration.

    Figure 2. Profile creation menu

    You can validate the profile using the following command:

        # installadm validate -n s11x86service -P /var/tmp/gz_profile.xml
        Validating static profile gz_profile.xml...  Passed

    Next, instantiate the profile with the install service. In our case, use the following syntax:

        # installadm create-profile -n s11x86service -f /var/tmp/gz_profile.xml -p gz_profile

    You can verify the profile creation using the following command:

        # installadm list -n s11x86service -p
        Service/Profile Name  Criteria
        --------------------  --------
        s11x86service
           gz_profile         None

    We can see that the gz_profile has been created and associated with the s11x86service install service.

    Step 7: Set up the Solaris Zones configuration profiles

    This step is similar to the Global Zone profile creation in Step 6:

        # sysconfig create-profile -o /var/tmp/zone1_profile.xml
        # sysconfig create-profile -o /var/tmp/zone2_profile.xml

    You can validate the profiles using the following commands:

        # installadm validate -n s11x86service -P /var/tmp/zone1_profile.xml
        Validating static profile zone1_profile.xml...  Passed
        # installadm validate -n s11x86service -P /var/tmp/zone2_profile.xml
        Validating static profile zone2_profile.xml...  Passed

    Next, associate the profiles with the install service. The following example adds the zone1_profile.xml configuration profile to the s11x86service install service and specifies that zone1 should use this profile:

        # installadm create-profile -n s11x86service -f /var/tmp/zone1_profile.xml -p zone1_profile -c zonename=zone1

    The following example adds the zone2_profile.xml configuration profile to the s11x86service install service and specifies that zone2 should use this profile:

        # installadm create-profile -n s11x86service -f /var/tmp/zone2_profile.xml -p zone2_profile -c zonename=zone2

    You can verify the profile creation using the following command:

        # installadm list -n s11x86service -p
        Service/Profile Name  Criteria
        --------------------  --------
        s11x86service
           zone1_profile      zonename = zone1
           zone2_profile      zonename = zone2
           gz_profile         None

    We can see that we have three profiles in the s11x86service install service:
    - Global Zone: gz_profile
    - zone1: zone1_profile
    - zone2: zone2_profile

    Step 8: Global Zone setup

    Associate the Global Zone client with the manifest and the profile that we created in the previous steps. The following example adds the manifest and profile to the client (Global Zone), where:
    - gzmanifest is the name of the manifest
    - gz_profile is the name of the configuration profile
    - mac="0:14:4f:2:a:19" is the client (Global Zone) MAC address
    - s11x86service is the install service name

        # installadm set-criteria -m gzmanifest -p gz_profile -c mac="0:14:4f:2:a:19" -n s11x86service

    You can verify the manifest and profile association using the following command:

        # installadm list -n s11x86service -p -m
        Service/Manifest Name  Status   Criteria
        ---------------------  ------   --------
        s11x86service
           gzmanifest                   mac  = 00:14:4F:02:0A:19
           orig_default        Default  None

        Service/Profile Name  Criteria
        --------------------  --------
        s11x86service
           gz_profile         mac      = 00:14:4F:02:0A:19
           zone2_profile      zonename = zone2
           zone1_profile      zonename = zone1

    Step 9: Provision the host with the Non-Global Zones

    The next step is to boot the client system off the network and provision it using the Automated Install service that we just set up. First, boot the client system. Figure 3 shows the network boot attempt (when done on an x86 system):

    Figure 3. Network boot

    Then you will be prompted by a GRUB menu, with a timer, as shown in Figure 4. The default selection (the "Text Installer and command line" option) is highlighted. Press the down arrow to highlight the second option, labeled Automated Install, and then press Enter. The reason we need to do this is that we want to prevent a system from being automatically re-installed if it were to be booted from the network accidentally.

    Figure 4. GRUB menu

    What follows is the continuation of a networked boot from the Automated Install server. The client downloads a mini-root (a small set of files in which to successfully run the installer), identifies the location of the Automated Install manifest on the network, retrieves that manifest, and then processes it to identify the address of the IPS repository from which to obtain the desired software payload. Non-Global Zones are installed and configured on the first reboot after the Global Zone is installed.

    You can list the status of all the Solaris Zones using the following command:

        # zoneadm list -civ

    Once the Zones are in the running state, you can log in to a Zone using the following command:

        # zlogin zone1

    Troubleshooting Automated Installations

    If an installation to a client system fails, you can find the client log at /system/volatile/install_log.

    NOTE: Zones are not installed if any of the following errors occurs:
    - A zone config file is not syntactically correct.
    - A collision exists among zone names, zone paths, or delegated ZFS datasets in the set of zones to be installed.
    - Required datasets are not configured in the global zone.

    For more troubleshooting information see “Installing Oracle Solaris 11 Systems”.

    Conclusion

    This paper demonstrated the benefits of using the Automated Install server to simplify Non-Global Zones setup, including the creation and configuration of the Global Zone manifest and the Solaris Zones profiles.

    Read the article

  • Perfmon - Redirector current command 0 result

    - by Dave
    I'd like to monitor how many SMB connections there are at any given time for my 2008 R2 file server, but when I add Redirector/Current Commands in perfmon, I get 0 results. This KB from Microsoft isn't exactly helpful either: http://support.microsoft.com/kb/2523382 - it merely confirms there is an issue but doesn't provide a workaround. How would I go about getting the current number of SMB connections? Thanks for your help in advance.
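    One hedged observation (an assumption, not a confirmed fix): the Redirector object counts outbound SMB activity - the machine acting as an SMB client - which may be why it reads 0 on a file server. For inbound sessions, the Server object's counters, or a quick net session, is the usual place to look:

        rem count inbound SMB sessions from an elevated prompt
        net session

        rem or, in perfmon, add this counter instead of the Redirector object:
        rem   \Server\Server Sessions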

    Read the article

  • GlusterFS vs Ceph, which is better for production use for the moment?

    - by Mickey Shine
    I am evaluating GlusterFS and Ceph. Gluster seems to be FUSE-based, which means it may not be as fast as Ceph. But Gluster appears to have a very friendly control panel and is easy to use. Ceph was merged into the Linux kernel a few days ago, which indicates that it has much more potential energy and may be a good choice in the future. I am wondering which (even outside of these two?) is a better choice for production use? It would be nice if you could share your practical experiences.

    Read the article
