Search Results

Search found 25754 results on 1031 pages for 'serial number'.


  • Special scheduling Algorithm (pattern expansion)

    - by tovare
     Question: Do you think genetic algorithms are worth trying out for the problem below, or will I hit local-minima issues? I think aspects of the problem might suit a generator / fitness-function style setup. (If you've botched a similar project I would love to hear from you, so I can avoid doing something similar.) Thank you for any tips on how to structure things and nail this right.
     The problem: I'm searching for a good scheduling algorithm for the following real-world problem. I have a sequence with 15 slots like this (the digits may vary from 0 to 20), and there are in total 10 different sequences of this type:
     2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
     Each sequence needs to expand into a matrix, where each slot becomes one column, for example:
     1 1 0 0 1 1 1 0 0 0 1 1 1 0 0
     1 1 0 0 1 1 1 0 0 0 1 1 1 0 0
     0 0 1 1 0 0 0 1 1 1 0 0 0 1 1
     0 0 1 1 0 0 0 1 1 1 0 0 0 1 1
     The constraints on the matrix are: row-wise (i.e. horizontally), every run of ones must be either "1 1" or "1 1 1"; row-wise, two runs of ones must be separated by at least "0 0"; the sum of each column must match the original sequence; and the number of rows in the matrix should be minimized.
     The expanded matrix is then allocated to one of 4 different matrices, which may have different numbers of rows: A, B, C, D. A, B, C and D are real-world departments. The load needs to be spread reasonably fairly over the course of a 10-day period so as not to interfere with other department goals. Each matrix is compared with the expansion of the 10 different original sequences, so you have: A1, A2, A3, A4, A5, A6, A7, A8, A9, A10; B1 through B10; C1 through C10; D1 through D10. Certain spots on these may be reserved (not sure if I should make it just reserved/not reserved or function-based); the reserved spots might be meetings and other events. The sum of each department's rows (for instance all the A's) should be approximately the same, within 2%; i.e. sum(A1 through A10) should be approximately the same as sum(B1 through B10), etc. The number of rows can vary, so you might have, for instance: A1: 5 rows, A2: 5 rows, A3: 1 row, where that single row could for instance be: 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0, etc.
     Sub-problem: I'd be very happy to solve only part of the problem. For instance, being able to input 1 1 2 3 4 2 2 3 4 2 2 3 3 2 3 and get an appropriate set of rows of 1's and 0's, minimized on the number of rows, following the constraints above.
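
     Since the question asks whether a generator / fitness-function setup is viable, here is a minimal sketch (in Python, with made-up function names and arbitrary penalty weights) of what a fitness function for the row-expansion sub-problem could look like. It assumes a candidate solution is a list of rows of 0/1 values and that the constraints are: runs of ones of length 2 or 3, at least two zeros between runs, and column sums equal to the original sequence.

     def run_lengths(row):
         # lengths of the maximal runs of consecutive ones in a row
         runs, current = [], 0
         for v in row + [0]:              # sentinel 0 flushes the last run
             if v == 1:
                 current += 1
             elif current:
                 runs.append(current)
                 current = 0
         return runs

     def gap_lengths(row):
         # lengths of the zero gaps strictly between runs of ones
         trimmed = row[:]
         while trimmed and trimmed[0] == 0:
             trimmed.pop(0)
         while trimmed and trimmed[-1] == 0:
             trimmed.pop()
         gaps, current = [], 0
         for v in trimmed:
             if v == 0:
                 current += 1
             elif current:
                 gaps.append(current)
                 current = 0
         return gaps

     def fitness(rows, target):
         penalty = 0.0
         for row in rows:
             penalty += sum(1 for r in run_lengths(row) if r not in (2, 3))  # runs must be 2 or 3 ones
             penalty += sum(1 for g in gap_lengths(row) if g < 2)            # gaps must be at least 2 zeros
         for col, want in enumerate(target):
             penalty += abs(sum(row[col] for row in rows) - want)            # column sums must match
         penalty += 0.1 * len(rows)       # mild pressure toward fewer rows
         return -penalty                  # higher is better

     # The 4-row expansion from the question scores -0.4: no constraint
     # violations, only the row-count term.
     target = [2] * 15
     rows = [
         [1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0],
         [1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0],
         [0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1],
         [0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1],
     ]
     print(fitness(rows, target))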

    Read the article

  • Python: Script works, but seems to deadlock after some time

    - by sberry2A
     I have the following script, which is working for the most part: Link to PasteBin. The script's job is to start a number of threads, each of which starts a subprocess with Popen. The output from each subprocess is as follows: 1 2 3 . . . n Done
     Basically the subprocess is transferring 10M records from tables in one database to different tables in another db, with a lot of data massaging/manipulation in between because of the different schemas. If the subprocess fails at any time in its execution (bad records, duplicate primary keys, etc.), or it completes successfully, it will output "Done\n". If there are no more records to select against for transfer, then it will output "NO DATA\n".
     My intent was to create my script "tableTransfer.py" which would spawn a number of these processes, read their output, and in turn output information such as number of updates completed, time remaining, time elapsed, and number of transfers per second. I started running the process last night and checked in this morning to see it had deadlocked. There were no subprocesses running, there were still records to be updated, and the script had not exited. It was simply sitting there, no longer outputting the current information, because no subprocesses were running to update the total number complete, which is what controls updates to the output. This is running on OS X.
     I am looking for three things: I would like to get rid of the possibility of this deadlock occurring so I don't need to check in on it as frequently. Is there some issue with locking? Am I doing this in a bad way (a gThreading variable to control looping of spawning additional threads, etc.)? I would appreciate some suggestions for improving my overall methodology. How should I handle ctrl-c exit? Right now I need to kill the process, but I assume I should be able to use the signal module or something similar to catch the signal and stop the threads; is that right? I am not sure whether I should be pasting my entire script here, since I usually just paste snippets. Let me know if I should paste it here as well.
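
     The original tableTransfer.py is not shown here, but as a rough sketch of one pattern that avoids the most common Popen deadlock (a pipe buffer that fills up and is never drained) and handles ctrl-c via the signal module, something like the following could be a starting point. The child command "worker.py", the thread count, and the counts dict are placeholders, not the asker's actual code.

     import signal
     import subprocess
     import threading

     stop = threading.Event()

     def handle_sigint(signum, frame):
         stop.set()                      # ask the workers to stop after their current subprocess

     signal.signal(signal.SIGINT, handle_sigint)

     def worker(counts, lock):
         while not stop.is_set():
             proc = subprocess.Popen(
                 ["python", "worker.py"],            # placeholder child command
                 stdout=subprocess.PIPE,
                 stderr=subprocess.STDOUT,           # merge stderr so neither pipe can fill up
                 text=True,
             )
             for line in proc.stdout:                # consume output as it is produced
                 line = line.strip()
                 if line == "NO DATA":
                     proc.wait()
                     return                          # nothing left to transfer
                 if line.isdigit():
                     with lock:
                         counts["records"] += 1
             proc.wait()                             # "Done": reap the child, then loop again

     lock = threading.Lock()
     counts = {"records": 0}
     threads = [threading.Thread(target=worker, args=(counts, lock)) for _ in range(4)]
     for t in threads:
         t.start()
     for t in threads:
         t.join()
     print("records transferred:", counts["records"])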

    Read the article

  • How to obtain the first cluster of the directory's data in FAT using C# (or at least C++) and Win32 API

    - by DarkWalker
     So I have a FAT drive, let's say H:, and a directory 'work' (full path 'H:\work'). I need to get the NUMBER of the first cluster of that directory. The number of the first cluster is a 2-byte value stored in the 26th and 27th bytes of the folder entry (which is 32 bytes). Let's say I am doing it with a file, NOT a directory. I can use code like this: static public string GetDirectoryPtr(string dir) { IntPtr ptr = CreateFile(@"H:\Work\dover.docx", GENERIC_READ, FILE_SHARE_READ | FILE_SHARE_WRITE, IntPtr.Zero, OPEN_EXISTING, 0,//FILE_FLAG_BACKUP_SEMANTICS, IntPtr.Zero); try { const uint bytesToRead = 2; byte[] readbuffer = new byte[bytesToRead]; if (ptr.ToInt32() == -1) return String.Format("Error: cannot open directory {0}", dir); if (SetFilePointer(ptr, 26, 0, 0) == -1) return String.Format("Error: unable to set file pointer on file {0}", ptr); uint read = 0; // real count of read bytes if (!ReadFile(ptr, readbuffer, bytesToRead, out read, 0)) return String.Format("can't read from file {0}. Error #{1}", ptr, Marshal.GetLastWin32Error()); int result = readbuffer[0] + 16 * 16 * readbuffer[1]; return result.ToString();//ASCIIEncoding.ASCII.GetString(readbuffer); } finally { CloseHandle(ptr); } } And it will return some number, like 19 (quite plausible to me, since this is the only file on the disk). But I DON'T need a file, I need a folder. So I am putting the FILE_FLAG_BACKUP_SEMANTICS param in the CreateFile call... and don't know what to do next =) MSDN is very clear on this issue: http://msdn.microsoft.com/en-us/library/aa365258(v=VS.85).aspx It sounds to me like: "There is no way you can get the number of the folder's first cluster". The most desperate thing is that my tutor said something like "You are going to obtain this or you won't pass this course". The true reason why he is so sure this is possible is that for 10 years (or maybe more) he received the folder's first cluster number as a HASH of the folder's address (and I was stupid enough to point this out to him, so now I can't do it the same way). PS: This is the most stupid task I have ever had!!! This value is not really used anywhere in the program; it is only a pointless integer.
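
     For reference, here is a small byte-layout sketch in Python (the question itself is about C# and the Win32 API) of how the first-cluster number sits inside a raw 32-byte FAT directory entry: the low word is a little-endian 16-bit value at offset 26, and on FAT32 the high word sits at offset 20. The `sample` buffer below is a zeroed placeholder; actually obtaining the entry bytes for a directory still requires reading the parent directory's data or the raw volume, which the plain CreateFile-on-a-path approach in the question cannot do.

     import struct

     def first_cluster(entry: bytes) -> int:
         # entry is one 32-byte FAT directory entry
         low = struct.unpack_from("<H", entry, 26)[0]
         high = struct.unpack_from("<H", entry, 20)[0]   # always 0 on FAT12/FAT16
         return (high << 16) | low

     sample = bytes(32)               # placeholder entry, all zeros
     print(first_cluster(sample))     # prints 0 for the zeroed placeholder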

    Read the article

  • difference between calling javascript function on body load or directly from script.

    - by Abbas
     I am using JavaScript to create multiple divs (say 5) at runtime. All the divs contain some text, which is also set at runtime. Now I want to hide all the divs and have page numbers at the bottom, so that whenever the user clicks on a page number only that div becomes visible and the others are hidden. I have created a function which accepts the page number as a parameter: it shows the div whose page number was clicked and, using a for loop, hides all the other divs. My problem is that I have two functions: the first adds the divs, writes content to them, and hides all of them except the first; the other shows the div whose page number is clicked. I call the adding function on body onload. The first time the page runs, everything goes well, but when I click on any of the page numbers, that div gets shown and then the AddDiv function runs again and re-shows all the divs. Please explain why this is happening and how I should resolve my issue. Below is my script; the content for the divs comes from JSON.
     <body onload="JsonScript();"> <script language="javascript" type="text/javascript"> function JsonScript() { var existingDiv = document.getElementById("form1"); var newAnchorDiv = document.createElement("div"); newAnchorDiv.id = "anchorDiv"; var list = { "Article": articleList }; for(var i=0; i < list.Article.length; i++) { var newDiv = document.createElement("div"); newDiv.id = "div"+(i+1); newDiv.innerHTML = list.Article[i].toString(); newAnchorDiv.innerHTML += "<a href='' onclick='displayMessage("+(i+1)+")'>"+(i+1)+"</a>&nbsp;"; existingDiv.appendChild(newDiv); existingDiv.appendChild(newAnchorDiv); } for(var j = 2; j < list.Article.length + 1; j ++) { var getDivs = document.getElementById("div"+j); getDivs.style.display = "none"; } } function displayMessage(currentId) { var list = {"Article" : articleList} document.getElementById("div"+currentId).style.display = 'block'; for(var i = 1; i < list.Article.length + 1; i++) { if (i != currentId) { document.getElementById("div"+i).style.display = 'none'; } } } </script>
     Thanks and Regards

    Read the article

  • Java: Combination of recursive loops which has different FOR loop inside; Output: FOR loops indexes

    - by vvinjj
     Recursion is currently a fresh and difficult topic for me, but I need to use it in one of my algorithms. Here is the challenge: I need a method where I specify the number of recursions (the number of nested FOR loops) and the number of iterations for each FOR loop. The result should show something similar to a counter, except that each column of the counter is limited to its own specific number. ArrayList<Integer> specs= new ArrayList<Integer>(); specs.add(5); //for(int i=0 to 5; i++) specs.add(7); specs.add(9); specs.add(2); specs.add(8); specs.add(9); public void recursion(ArrayList<Integer> specs){ //number of nested loops will be equal to: specs.size(); //each item in specs, specifies the For loop max count e.g: //First outside loop will be: for(int i=0; i< specs.get(0); i++) //Second loop inside will be: for(int i=0; i< specs.get(1); i++) //... } The results should be similar to the output of these manually nested loops: int[] i; i = new int[7]; for( i[6]=0; i[6]<5; i[6]++){ for( i[5]=0; i[5]<7; i[5]++){ for(i[4] =0; i[4]<9; i[4]++){ for(i[3] =0; i[3]<2; i[3]++){ for(i[2] =0; i[2]<8; i[2]++){ for(i[1] =0; i[1]<9; i[1]++){ //... System.out.println(i[1]+" "+i[2]+" "+i[3]+" "+i[4]+" "+i[5]+" "+i[6]); } } } } } } I have already spent 3 days on this with no results, and have been searching the internet, but the examples are too different. This is the first time in my life I am posting a programming question on the internet. Thank you in advance; you are free to change the code for efficiency, I just need the same results.
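
     As an illustration of the recursive shape being asked for, here is an algorithm-only sketch in Python (the question itself is about Java): one recursion level replaces one nested loop, with the corresponding entry of specs as that level's bound. The function name and the column order of the printed indexes are placeholders; the hand-written Java loops above print the innermost index first, so the output order may need reversing.

     def recursion(specs, indexes=None):
         indexes = indexes or []
         if len(indexes) == len(specs):
             print(" ".join(str(i) for i in indexes))   # one "counter" line per combination
             return
         for i in range(specs[len(indexes)]):           # bound for this nesting depth
             recursion(specs, indexes + [i])

     recursion([5, 7, 9, 2, 8, 9])   # prints 5*7*9*2*8*9 = 45360 lines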

    Read the article

  • QTreeView incorrectly displays the SpinBox if item is checkable and when using QWindowsStyle

    - by Sharraz
    Hello, I'm having a problem with a QTreeView in my program: The SpinBox used to edit the double value of a checkable item is displayed incorrectly when using the Windows style. Only the up and down buttons of the SpinBox can be seen, but not any value. The following example code is able to reproduce the problem: #include <QtGui> class Model : public QAbstractItemModel { public: Model() : checked(false), number(0) {} Qt::ItemFlags flags(const QModelIndex & index) const { return Qt::ItemIsEnabled | Qt::ItemIsEditable | Qt::ItemIsSelectable | Qt::ItemIsUserCheckable; } QVariant data(const QModelIndex &index, int role) const { switch (role) { case Qt::DisplayRole: case Qt::EditRole: return QVariant(number); case Qt::CheckStateRole: return QVariant(checked ? Qt::Checked : Qt::Unchecked); } return QVariant(); } QVariant headerData(int section, Qt::Orientation orientation, int role) const { return QVariant(); } int rowCount(const QModelIndex &parent) const { return parent.isValid() ? 0 : 1; } int columnCount(const QModelIndex &parent) const { return parent.isValid() ? 0 : 1; } bool setData(const QModelIndex &index, const QVariant &value, int role) { switch (role) { case Qt::EditRole: number = value.toDouble(); emit dataChanged(index, index); return true; case Qt::CheckStateRole: checked = value.toInt(); emit dataChanged(index, index); return true; } return false; } QModelIndex index(int row, int column, const QModelIndex &parent) const { if (!row && !column && !parent.isValid()) return createIndex(0, 0); return QModelIndex(); } QModelIndex parent(const QModelIndex &child) const { return QModelIndex(); } private: bool checked; double number; }; int main(int argc, char *argv[]) { QApplication app(argc, argv); QApplication::setStyle(new QWindowsStyle()); QTreeView tree; tree.setModel(new Model()); tree.show(); return app.exec(); } The problems seems to have something to do with the checkbox. If Qt::ItemIsUserCheckable is removed, the SpinBox will be displayed correctly. If the number is replaced by a longer one like 0.01, it can be seen partially. Any idea how this problem can be solved? Do I use the checkbox correctly? Greets, Sharraz

    Read the article

  • Searching for duplicate records within a text file where the duplicate is determined by only two fields

    - by plg
     First, Python newbie; be patient/kind. Next, once a month I receive a large text file (think 7 million records) to test for duplicate values. This is catalog information. I get 7 fields, but the two I'm interested in are a supplier code and a full orderable part number. To determine if the record is duplicated, I strip all special characters from the part number (except . and #) and create a compressed part number. The test for duplicates becomes the supplier code and compressed part number combination. This part is fairly straightforward. Currently, I am just copying the original file with 2 new columns (compressed part and duplicate indicator). If the part is a duplicate, I put a "YES" in the last field. Now that this is done, I want to be able to go back (or better yet, at the same time) to get the previous record where there was a supplier code/compressed part number match. So far, my code looks like this:
     Compress Full Part to a Compressed Part and Check for Duplicates on Supplier Code and Compressed Part combination import sys import re import time ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ start=time.time() try: file1 = open("C:\Accounting\May Accounting\May.txt", "r") except IOError: print sys.stderr, "Cannot Open Read File" sys.exit(1) try: file2 = open(file1.name[0:len(file1.name)-4] + "_" + "COMPRESSPN.txt", "a") except IOError: print sys.stderr, "Cannot Open Write File" sys.exit(1) hdrList="CIGSUPPLIER|FULL_PART|PART_STATUS|ALIAS_FLAG|ACQUISITION_FLAG|COMPRESSED_PART|DUPLICATE_INDICATOR" file2.write(hdrList+chr(10)) lines_seen=set() affirm="YES" records = file1.readlines() for record in records: fields = record.split(chr(124)) if fields[0]=="CIGSupplier": continue #If incoming file has a header line, skip it file2.write(fields[0]+"|"), #Supplier Code file2.write(fields[1]+"|"), #Full_Part file2.write(fields[2]+"|"), #Part Status file2.write(fields[3]+"|"), #Alias Flag file2.write(re.sub("[$\r\n]", "", fields[4])+"|"), #Acquisition Flag file2.write(re.sub("[^0-9a-zA-Z.#]", "", fields[1])+"|"), #Compressed_Part dupechk=fields[0]+"|"+re.sub("[^0-9a-zA-Z.#]", "", fields[1]) if dupechk not in lines_seen: file2.write(chr(10)) lines_seen.add(dupechk) else: file2.write(affirm+chr(10)) print "it took", time.time() - start, "seconds." ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ file2.close() file1.close()
     It runs in less than 6 minutes, so I am happy with this part, even if it is not elegant. Right now, when I get my results, I import them into Access and do a self join to locate the duplicates. Loading/querying/exporting a file this size in Access takes around an hour, so I would like to be able to export the matched duplicates to another text file or an Excel file. Confusing enough? Thanks.
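
     Since the goal is to also capture the earlier record of each supplier-code/compressed-part match without the round trip through Access, one possible adjustment, sketched below with an assumed output file name and the same pipe-delimited layout as the question, is to remember the first record seen for each key in a dict and write both records to a separate duplicates file in the same pass. For 7 million rows this keeps one line per unique key in memory, which is usually acceptable for lines of this size.

     import re

     first_seen = {}     # key -> first full record seen with that key
     with open(r"C:\Accounting\May Accounting\May.txt") as src, \
          open(r"C:\Accounting\May Accounting\May_DUPLICATES.txt", "w") as dupes:
         for record in src:
             fields = record.rstrip("\r\n").split("|")
             if fields[0] == "CIGSupplier":
                 continue                                   # skip the header line
             compressed = re.sub(r"[^0-9a-zA-Z.#]", "", fields[1])
             key = fields[0] + "|" + compressed
             if key in first_seen:
                 dupes.write(first_seen[key])               # the earlier matching record
                 dupes.write(record)                        # the duplicate itself
             else:
                 first_seen[key] = record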

    Read the article

  • C++ Recursion Issue

    - by stupidmonkey
    Hi guys, I feel a little dumb asking this, but here we go... When trying to follow the Recursion example at the following website http://www.cplusplus.com/doc/tutorial/functions2/, I ran into a road bump that has me perplexed. I altered the code slightly just to get my head around the code in the recursion example and I pretty much have my head around it, but I can't figure out why the variable 'n' increments in 'Pass B' when I have not told the program to increment 'n'. Could you please help explain this? #include <stdlib.h> #include <iostream> using namespace std; long factorial (long n) { if (n > 1) { long r(0); cout << "Pass A" << endl; cout << "n = " << n << endl; cout << "r = " << r << endl; r = n * factorial (n-1); cout << "Pass B" << endl; cout << "n = " << n << endl; cout << "r = " << r << endl; return (r); } else return (1); } int main () { long number; cout << "Please type a number: "; cin >> number; cout << number << "! = " << factorial (number) << endl; system ("pause"); return 0; }

    Read the article

  • Sorting a list of numbers with modified cost

    - by David
     First, this was one of the four problems we had to solve in a project last year, and I couldn't find a suitable algorithm, so we handed in a brute force solution.
     Problem: The numbers are in a list that is not sorted and supports only one type of operation. The operation is defined as follows: given a position i and a position j, the operation moves the number at position i to position j without altering the relative order of the other numbers. If i > j, the positions of the numbers between positions j and i-1 increase by 1; otherwise, if i < j, the positions of the numbers between positions i+1 and j decrease by 1. This operation requires i steps to find the number to move and j steps to locate the position to which you want to move it, so the number of steps required to move a number from position i to position j is i+j. We need to design an algorithm that, given a list of numbers, determines the optimal (in terms of cost) sequence of moves to rearrange the sequence.
     Attempts: Part of our investigation was around NP-completeness; we turned it into a decision problem and tried to find a suitable transformation to any of the problems listed in Garey and Johnson's book Computers and Intractability, with no results. There is also no direct reference (from our point of view) to this kind of variation in Donald E. Knuth's The Art of Computer Programming, Vol. 3: Sorting and Searching. We also analyzed algorithms for sorting linked lists, but none of them gives a good idea of how to find the optimal sequence of movements.
     Note that the idea is not to find an algorithm that sorts the sequence, but one that tells me the optimal sequence of movements, in terms of cost, that organizes the sequence. You can make a copy and sort it to analyze the final position of the elements if you want; in fact, we may assume that the list contains the numbers from 1 to n, so we know where we want to put each number, and we are just concerned with minimizing the total cost of the steps. We tested several greedy approaches, but all of them failed; divide and conquer sorting algorithms can't be used because they swap portions of the list with no cost; and our dynamic programming approaches had to consider too many cases. The brute force recursive algorithm takes all the possible moves from i to j, and then again all the possible moves of the rest of the elements, and at the end returns the sequence with the lowest total cost that sorted the list. As you can imagine, the cost of this algorithm is brutal and makes it impracticable for more than 8 elements.
     Our observations: n movements are not necessarily cheaper than n+1 movements (unlike swaps in arrays, which are O(1)). There are basically two ways of moving one element from position i to j: one is to move it directly, and the other is to move other elements around i in such a way that it reaches position j. At most you make n-1 movements (the untouched element reaches its position alone). If it is the optimal sequence of movements, then you didn't move the same element twice.
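
     To make the cost model concrete, here is a small helper, sketched in Python with 1-based positions as in the problem statement, that applies a candidate sequence of (i, j) moves to a list and returns the resulting list together with the total cost. It only scores a given move sequence; it does not search for the optimal one.

     def apply_moves(items, moves):
         items = list(items)
         total = 0
         for i, j in moves:                 # 1-based positions, as in the problem statement
             value = items.pop(i - 1)       # i steps to find the number to move
             items.insert(j - 1, value)     # j steps to locate the target position
             total += i + j
         return items, total

     # Example: sort [3, 1, 2] with a single move of the 3 (position 1) to position 3
     print(apply_moves([3, 1, 2], [(1, 3)]))   # ([1, 2, 3], 4)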

    Read the article

  • Stop running this script, IE7 using PHP

    - by Jomel Dicen
     I generate JavaScript from my PHP program; please check my code. It loops depending on the number of records in the database, for instance:
     $counter = 0; foreach($row_value as $data): echo $this->javascript($counter, $data->exrate, $data->tab); endforeach; private function javascript($counter=NULL, $exrate=NULL, $tab=NULL){ $js = " <script type='text/javascript'> $(function () { var textBox0 = $('input:text[id$=quantity{$counter}]').keyup(foo); var textBox1 = $('input:text[id$=mc{$counter}]').keyup(foo); var textBox2 = $('input:text[id$=lc{$counter}]').keyup(foo); function foo() { var value0 = textBox0.val(); var value1 = textBox1.val(); var value2 = textBox2.val(); var sum = add(value1, value2) * (value0 * {$exrate}); $('input:text[id$=result{$counter}]').val(parseFloat(sum).toFixed(2)); // Compute Total Quantity var qtotal = 0; $('.quantity{$tab}').each(function() { qtotal += Number($(this).val()); }); $('#tquantity{$tab}').text(qtotal); // Compute MC UNIT var mctotal = 0; $('.mc{$tab}').each(function() { mctotal += Number($(this).val()); }); $('#tmc{$tab}').text(mctotal); // Compute LC UNIT var lctotal = 0; $('.lc{$tab}').each(function() { lctotal += Number($(this).val()); }); $('#tlc{$tab}').text(lctotal); // Compute Result var result = 0; $('.result{$tab}').each(function() { result += Number($(this).val()); }); $('#tresult{$tab}').text(result); } function add() { var sum = 0; for (var i = 0, j = arguments.length; i < j; i++) { if (IsNumeric(arguments[i])) { sum += parseFloat(arguments[i]); } } return sum; } function IsNumeric(input) { return (input - 0) == input && input.length > 0; } }); </script> "; return $js; }
     When I run this in IE, it keeps showing me the message "Stop running this script? A script on this page is causing your web browser to run slowly. If it continues to run, your computer might become unresponsive.", but in Firefox it works fine.

    Read the article

  • Code to generate random numbers in C++

    - by user1678927
     Basically I have to write a program that generates random numbers to simulate the rolling of a pair of dice. This program should be constructed in multiple files: the main function should be in one file, the other functions should be in a second source file, and their prototypes should be in a header file. First I write a short function that returns a random value between 1 and 6 to simulate the rolling of a single 6-sided die. Second, I write a function that pretends to roll a pair of dice by calling this function twice. My program starts by asking the user how many rolls should be made. Then I write a function to simulate rolling the dice this many times, keeping a count in an array of exactly how many times the values 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 (each number is the sum of a pair of dice) occur. Later I write a function to display a small bar chart using these counts; for a sample of 144 rolls it would ideally print the column headers 2 through 12 with a column of asterisks under each value, where the number of asterisks corresponds to the count. Next, to see how well the random number generator is doing, I write a function to compute the average value rolled and compare this to the ideal average of 7. Also, I print out a small table showing, in separate columns, the counts of each roll made by the program, the ideal count based on the frequencies above given the total number of rolls, and the difference between these values. This is my incomplete code so far (compiler: Visual Studio 2010): int rolling(){ //Function that returns a random value between 1 and 6 rand(unsigned(time(NULL))); int dice = 1 + (rand() %6); return dice; } int roll_dice(int num1,int num2){ //it calls 'rolling function' twice int result1,result2; num1 = rolling(); num2 = rolling(); result1 = num1; result2 = num2; return result1,result2; } int main(void){ int times,i,sum,n1,n2; int c1,c2,c3,c4,c5,c6,c7,c8,c9,c10,c11;//counters for each sum printf("Please enter how many times you want to roll the dice.\n") scanf_s("%i",&times); I intend to use counters to count each sum and store the number (the count) in an array. I know I need a for loop and some conditional (if) statements, but my main problem is how to get the values from roll_dice, store them in n1 and n2, sum them up, and store the sum in 'sum'.
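
     As a language-agnostic illustration of the counting step the question is stuck on (the assignment itself is in C), here is a short Python sketch: roll twice, sum the two dice, and tally the sums 2 through 12 in an array indexed by the sum, from which both the bar chart and the average can be produced. All names here are placeholders, not the assignment's required functions.

     import random

     def roll_die():
         return random.randint(1, 6)

     def simulate(rolls):
         counts = [0] * 13                    # index by sum 2..12; slots 0 and 1 stay unused
         for _ in range(rolls):
             counts[roll_die() + roll_die()] += 1
         return counts

     counts = simulate(144)
     for value in range(2, 13):
         print(f"{value:2d} {'*' * counts[value]}")        # one bar per possible sum
     average = sum(v * c for v, c in enumerate(counts)) / 144
     print("average:", average, "(ideal is 7)")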

    Read the article

  • Parsing ip addresses in php

    - by user2938780
     I am trying to get the number of active connections (in real time) from a log file, by counting IPs that are connected and have a "play" status, but instead it's giving me the total number of IPs that have ever had a "play" status. The number doesn't decrease at all; it keeps increasing as soon as a new IP is added. How can I fix it? Here is my code:
     $stringToParse = file_get_contents('wowzamediaserver_access.log'); preg_match_all('/\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}/', $stringToParse, $matchOP); echo "Number of connections: ".sizeof(array_unique($matchOP[0]));
     HERE IS THE LOG:
     2013-10-30 14:54:36 CET stop stream INFO 200 account1 - _defaultVHost_ account1 _definst_ 149.21 streamURL 1935 fullStreamURL IP_ADDRESS_1 http (cupertino) -
     2013-10-30 14:56:12 CET play stream INFO 200 account2 - _defaultVHost_ account1 _definst_ 149.21 streamURL 1935 fullStreamURL IP_ADDRESS_2 rtmp (cupertino) -
     2013-10-30 14:58:23 CET stop stream INFO 200 account2 - _defaultVHost_ account1 _definst_ 149.21 streamURL 1935 fullStreamURL IP_ADDRESS_2 rtmp (cupertino) -
     2013-10-30 14:58:39 CET play stream INFO 200 account1 - _defaultVHost_ account1 _definst_ 149.21 streamURL 1935 fullStreamURL IP_ADDRESS_1 http (cupertino) -
     2013-10-30 14:59:12 CET play stream INFO 200 account2 - _defaultVHost_ account1 _definst_ 149.21 streamURL 1935 fullStreamURL IP_ADDRESS_2 rtmp (cupertino) -
     I want to count an IP whenever it has a "play" status and not count it whenever it has a "stop" status:
     2013-10-30 14:59:00 CET play stream INFO 200 tv2vielive - _defaultVHost_ tv2vielive _definst_ 0.315 [any] 1935 rtmp://tv2vie.zion3cloud.com:1935/tv2vielive 78.247.255.186 rtmp http://www.tv2vie.org/swf/flowplayer-3.2.16.swf WIN 11,7,700,202 92565864 3576 3455 1 0 0 0 tv2vielive - - - - - rtmp://tv2vie.zion3cloud.com:1935/tv2vielive/tv2vielive rtmp://tv2vie.zion3cloud.com:1935/tv2vielive/tv2vielive - rtmp://tv2vie.zion3cloud.com:1935/tv2vielive -
     2013-10-30 14:59:04 CET stop stream INFO 200 tv2vielive - _defaultVHost_ tv2vielive _definst_ 4.75 [any] 1935 rtmp://tv2vie.zion3cloud.com:1935/tv2vielive 78.247.255.186 rtmp http://www.tv2vie.org/swf/flowplayer-3.2.16.swf WIN 11,7,700,202 92565864 3576 512571 1 7222 0 503766 tv2vielive - - - - - rtmp://tv2vie.zion3cloud.com:1935/tv2vielive/tv2vielive rtmp://tv2vie.zion3cloud.com:1935/tv2vielive/tv2vielive - rtmp://tv2vie.zion3cloud.com:1935/tv2vielive -
     Any solutions? I have even tried the solution from the first answer, but I am getting "0" play connections:
     $stringToParse = file_get_contents('wowzamediaserver_access.log'); $pattern = '~^.* play.* ( ([0-9]{1,3}+\.){3,3}[0-9]{1,3}).*$~m'; preg_match_all($pattern, $stringToParse, $matches); echo count($matches[1]) . ' play actions';
     But whenever I use my own code, I am getting "Number of connections: xxxxx" (the total count of IPs). My concern is that I only need the count of IPs with a "play" status; if an IP changes to "stop", it should not be counted.
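
     For comparison, here is a language-neutral sketch of the counting logic being asked for, written in Python rather than PHP: remember the most recent action seen for each IP and count only the IPs whose last event is "play". The regex and the assumption that the client IP is the first dotted-quad token on each line are based on the sample log lines above and may need adjusting for the real log format.

     import re

     line_re = re.compile(r"\s(play|stop)\s.*?(\d{1,3}(?:\.\d{1,3}){3})")

     def active_connections(log_text):
         last_action = {}
         for line in log_text.splitlines():
             m = line_re.search(line)
             if m:
                 action, ip = m.group(1), m.group(2)
                 last_action[ip] = action              # later lines overwrite earlier ones
         return sum(1 for a in last_action.values() if a == "play")

     with open("wowzamediaserver_access.log") as f:
         print("Number of connections:", active_connections(f.read()))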

    Read the article

  • Microsoft and jQuery

    - by Rick Strahl
    The jQuery JavaScript library has been steadily getting more popular and with recent developments from Microsoft, jQuery is also getting ever more exposure on the ASP.NET platform including now directly from Microsoft. jQuery is a light weight, open source DOM manipulation library for JavaScript that has changed how many developers think about JavaScript. You can download it and find more information on jQuery on www.jquery.com. For me jQuery has had a huge impact on how I develop Web applications and was probably the main reason I went from dreading to do JavaScript development to actually looking forward to implementing client side JavaScript functionality. It has also had a profound impact on my JavaScript skill level for me by seeing how the library accomplishes things (and often reviewing the terse but excellent source code). jQuery made an uncomfortable development platform (JavaScript + DOM) a joy to work on. Although jQuery is by no means the only JavaScript library out there, its ease of use, small size, huge community of plug-ins and pure usefulness has made it easily the most popular JavaScript library available today. As a long time jQuery user, I’ve been excited to see the developments from Microsoft that are bringing jQuery to more ASP.NET developers and providing more integration with jQuery for ASP.NET’s core features rather than relying on the ASP.NET AJAX library. Microsoft and jQuery – making Friends jQuery is an open source project but in the last couple of years Microsoft has really thrown its weight behind supporting this open source library as a supported component on the Microsoft platform. When I say supported I literally mean supported: Microsoft now offers actual tech support for jQuery as part of their Product Support Services (PSS) as jQuery integration has become part of several of the ASP.NET toolkits and ships in several of the default Web project templates in Visual Studio 2010. The ASP.NET MVC 3 framework (still in Beta) also uses jQuery for a variety of client side support features including client side validation and we can look forward toward more integration of client side functionality via jQuery in both MVC and WebForms in the future. In other words jQuery is becoming an optional but included component of the ASP.NET platform. PSS support means that support staff will answer jQuery related support questions as part of any support incidents related to ASP.NET which provides some piece of mind to some corporate development shops that require end to end support from Microsoft. In addition to including jQuery and supporting it, Microsoft has also been getting involved in providing development resources for extending jQuery’s functionality via plug-ins. Microsoft’s last version of the Microsoft Ajax Library – which is the successor to the native ASP.NET AJAX Library – included some really cool functionality for client templates, databinding and localization. As it turns out Microsoft has rebuilt most of that functionality using jQuery as the base API and provided jQuery plug-ins of these components. Very recently these three plug-ins were submitted and have been approved for inclusion in the official jQuery plug-in repository and been taken over by the jQuery team for further improvements and maintenance. 
Even more surprising: The jQuery-templates component has actually been approved for inclusion in the next major update of the jQuery core in jQuery V1.5, which means it will become a native feature that doesn’t require additional script files to be loaded. Imagine this – an open source contribution from Microsoft that has been accepted into a major open source project for a core feature improvement. Microsoft has come a long way indeed! What the Microsoft Involvement with jQuery means to you For Microsoft jQuery support is a strategic decision that affects their direction in client side development, but nothing stopped you from using jQuery in your applications prior to Microsoft’s official backing and in fact a large chunk of developers did so readily prior to Microsoft’s announcement. Official support from Microsoft brings a few benefits to developers however. jQuery support in Visual Studio 2010 means built-in support for jQuery IntelliSense, automatically added jQuery scripts in many projects types and a common base for client side functionality that actually uses what most developers are already using. If you have already been using jQuery and were worried about straying from the Microsoft line and their internal Microsoft Ajax Library – worry no more. With official support and the change in direction towards jQuery Microsoft is now following along what most in the ASP.NET community had already been doing by using jQuery, which is likely the reason for Microsoft’s shift in direction in the first place. ASP.NET AJAX and the Microsoft AJAX Library weren’t bad technology – there was tons of useful functionality buried in these libraries. However, these libraries never got off the ground, mainly because early incarnations were squarely aimed at control/component developers rather than application developers. For all the functionality that these controls provided for control developers they lacked in useful and easily usable application developer functionality that was easily accessible in day to day client side development. The result was that even though Microsoft shipped support for these tools in the box (in .NET 3.5 and 4.0), other than for the internal support in ASP.NET for things like the UpdatePanel and the ASP.NET AJAX Control Toolkit as well as some third party vendors, the Microsoft client libraries were largely ignored by the developer community opening the door for other client side solutions. Microsoft seems to be acknowledging developer choice in this case: Many more developers were going down the jQuery path rather than using the Microsoft built libraries and there seems to be little sense in continuing development of a technology that largely goes unused by the majority of developers. Kudos for Microsoft for recognizing this and gracefully changing directions. Note that even though there will be no further development in the Microsoft client libraries they will continue to be supported so if you’re using them in your applications there’s no reason to start running for the exit in a panic and start re-writing everything with jQuery. Although that might be a reasonable choice in some cases, jQuery and the Microsoft libraries work well side by side so that you can leave existing solutions untouched even as you enhance them with jQuery. 
The Microsoft jQuery Plug-ins – Solid Core Features One of the most interesting developments in Microsoft’s embracing of jQuery is that Microsoft has started contributing to jQuery via standard mechanism set for jQuery developers: By submitting plug-ins. Microsoft took some of the nicest new features of the unpublished Microsoft Ajax Client Library and re-wrote these components for jQuery and then submitted them as plug-ins to the jQuery plug-in repository. Accepted plug-ins get taken over by the jQuery team and that’s exactly what happened with the three plug-ins submitted by Microsoft with the templating plug-in even getting slated to be published as part of the jQuery core in the next major release (1.5). The following plug-ins are provided by Microsoft: jQuery Templates – a client side template rendering engine jQuery Data Link – a client side databinder that can synchronize changes without code jQuery Globalization – provides formatting and conversion features for dates and numbers The first two are ports of functionality that was slated for the Microsoft Ajax Library while functionality for the globalization library provides functionality that was already found in the original ASP.NET AJAX library. To me all three plug-ins address a pressing need in client side applications and provide functionality I’ve previously used in other incarnations, but with more complete implementations. Let’s take a close look at these plug-ins. jQuery Templates http://api.jquery.com/category/plugins/templates/ Client side templating is a key component for building rich JavaScript applications in the browser. Templating on the client lets you avoid from manually creating markup by creating DOM nodes and injecting them individually into the document via code. Rather you can create markup templates – similar to the way you create classic ASP server markup – and merge data into these templates to render HTML which you can then inject into the document or replace existing content with. Output from templates are rendered as a jQuery matched set and can then be easily inserted into the document as needed. Templating is key to minimize client side code and reduce repeated code for rendering logic. Instead a single template can be used in many places for updating and adding content to existing pages. Further if you build pure AJAX interfaces that rely entirely on client rendering of the initial page content, templates allow you to a use a single markup template to handle all rendering of each specific HTML section/element. I’ve used a number of different client rendering template engines with jQuery in the past including jTemplates (a PHP style templating engine) and a modified version of John Resig’s MicroTemplating engine which I built into my own set of libraries because it’s such a commonly used feature in my client side applications. jQuery templates adds a much richer templating model that allows for sub-templates and access to the data items. Like John Resig’s original Micro Template engine, the core basics of the templating engine create JavaScript code which means that templates can include JavaScript code. To give you a basic idea of how templates work imagine I have an application that downloads a set of stock quotes based on a symbol list then displays them in the document. 
To do this you can create an ‘item’ template that describes how each of the quotes is renderd as a template inside of the document: <script id="stockTemplate" type="text/x-jquery-tmpl"> <div id="divStockQuote" class="errordisplay" style="width: 500px;"> <div class="label">Company:</div><div><b>${Company}(${Symbol})</b></div> <div class="label">Last Price:</div><div>${LastPrice}</div> <div class="label">Net Change:</div><div> {{if NetChange > 0}} <b style="color:green" >${NetChange}</b> {{else}} <b style="color:red" >${NetChange}</b> {{/if}} </div> <div class="label">Last Update:</div><div>${LastQuoteTimeString}</div> </div> </script> The ‘template’ is little more than HTML with some markup expressions inside of it that define the template language. Notice the embedded ${} expressions which reference data from the quote objects returned from an AJAX call on the server. You can embed any JavaScript or value expression in these template expressions. There are also a number of structural commands like {{if}} and {{each}} that provide for rudimentary logic inside of your templates as well as commands ({{tmpl}} and {{wrap}}) for nesting templates. You can find more about the full set of markup expressions available in the documentation. To load up this data you can use code like the following: <script type="text/javascript"> //var Proxy = new ServiceProxy("../PageMethods/PageMethodsService.asmx/"); $(document).ready(function () { $("#btnGetQuotes").click(GetQuotes); }); function GetQuotes() { var symbols = $("#txtSymbols").val().split(","); $.ajax({ url: "../PageMethods/PageMethodsService.asmx/GetStockQuotes", data: JSON.stringify({ symbols: symbols }), // parameter map type: "POST", // data has to be POSTed contentType: "application/json", timeout: 10000, dataType: "json", success: function (result) { var quotes = result.d; var jEl = $("#stockTemplate").tmpl(quotes); $("#quoteDisplay").empty().append(jEl); }, error: function (xhr, status) { alert(status + "\r\n" + xhr.responseText); } }); }; </script> In this case an ASMX AJAX service is called to retrieve the stock quotes. The service returns an array of quote objects. The result is returned as an object with the .d property (in Microsoft service style) that returns the actual array of quotes. The template is applied with: var jEl = $("#stockTemplate").tmpl(quotes); which selects the template script tag and uses the .tmpl() function to apply the data to it. The result is a jQuery matched set of elements that can then be appended to the quote display element in the page. The template is merged against an array in this example. When the result is an array the template is automatically applied to each each array item. If you pass a single data item – like say a stock quote – the template works exactly the same way but is applied only once. Templates also have access to a $data item which provides the current data item and information about the tempalte that is currently executing. This makes it possible to keep context within the context of the template itself and also to pass context from a parent template to a child template which is very powerful. Templates can be evaluated by using the template selector and calling the .tmpl() function on the jQuery matched set as shown above or you can use the static $.tmpl() function to provide a template as a string. This allows you to dynamically create templates in code or – more likely – to load templates from the server via AJAX calls. 
In short there are options The above shows off some of the basics, but there’s much for functionality available in the template engine. Check the documentation link for more information and links to additional examples. The plug-in download also comes with a number of examples that demonstrate functionality. jQuery templates will become a native component in jQuery Core 1.5, so it’s definitely worthwhile checking out the engine today and get familiar with this interface. As much as I’m stoked about templating becoming part of the jQuery core because it’s such an integral part of many applications, there are also a couple shortcomings in the current incarnation: Lack of Error Handling Currently if you embed an expression that is invalid it’s simply not rendered. There’s no error rendered into the template nor do the various  template functions throw errors which leaves finding of bugs as a runtime exercise. I would like some mechanism – optional if possible – to be able to get error info of what is failing in a template when it’s rendered. No String Output Templates are always rendered into a jQuery matched set and there’s no way that I can see to directly render to a string. String output can be useful for debugging as well as opening up templating for creating non-HTML string output. Limited JavaScript Access Unlike John Resig’s original MicroTemplating Engine which was entirely based on JavaScript code generation these templates are limited to a few structured commands that can ‘execute’. There’s no code execution inside of script code which means you’re limited to calling expressions available in global objects or the data item passed in. This may or may not be a big deal depending on the complexity of your template logic. Error handling has been discussed quite a bit and it’s likely there will be some solution to that particualar issue by the time jQuery templates ship. The others are relatively minor issues but something to think about anyway. jQuery Data Link http://api.jquery.com/category/plugins/data-link/ jQuery Data Link provides the ability to do two-way data binding between input controls and an underlying object’s properties. The typical scenario is linking a textbox to a property of an object and have the object updated when the text in the textbox is changed and have the textbox change when the value in the object or the entire object changes. The plug-in also supports converter functions that can be applied to provide the conversion logic from string to some other value typically necessary for mapping things like textbox string input to say a number property and potentially applying additional formatting and calculations. In theory this sounds great, however in reality this plug-in has some serious usability issues. Using the plug-in you can do things like the following to bind data: person = { firstName: "rick", lastName: "strahl"}; $(document).ready( function() { // provide for two-way linking of inputs $("form").link(person); // bind to non-input elements explicitly $("#objFirst").link(person, { firstName: { name: "objFirst", convertBack: function (value, source, target) { $(target).text(value); } } }); $("#objLast").link(person, { lastName: { name: "objLast", convertBack: function (value, source, target) { $(target).text(value); } } }); }); This code hooks up two-way linking between a couple of textboxes on the page and the person object. 
The first line in the .ready() handler provides mapping of object to form field with the same field names as properties on the object. Note that .link() does NOT bind items into the textboxes when you call .link() – changes are mapped only when values change and you move out of the field. Strike one. The two following commands allow manual binding of values to specific DOM elements which is effectively a one-way bind. You specify the object and a then an explicit mapping where name is an ID in the document. The converter is required to explicitly assign the value to the element. Strike two. You can also detect changes to the underlying object and cause updates to the input elements bound. Unfortunately the syntax to do this is not very natural as you have to rely on the jQuery data object. To update an object’s properties and get change notification looks like this: function updateFirstName() { $(person).data("firstName", person.firstName + " (code updated)"); } This works fine in causing any linked fields to be updated. In the bindings above both the firstName input field and objFirst DOM element gets updated. But the syntax requires you to use a jQuery .data() call for each property change to ensure that the changes are tracked properly. Really? Sure you’re binding through multiple layers of abstraction now but how is that better than just manually assigning values? The code savings (if any) are going to be minimal. As much as I would like to have a WPF/Silverlight/Observable-like binding mechanism in client script, this plug-in doesn’t help much towards that goal in its current incarnation. While you can bind values, the ‘binder’ is too limited to be really useful. If initial values can’t be assigned from the mappings you’re going to end up duplicating work loading the data using some other mechanism. There’s no easy way to re-bind data with a different object altogether since updates trigger only through the .data members. Finally, any non-input elements have to be bound via code that’s fairly verbose and frankly may be more voluminous than what you might write by hand for manual binding and unbinding. Two way binding can be very useful but it has to be easy and most importantly natural. If it’s more work to hook up a binding than writing a couple of lines to do binding/unbinding this sort of thing helps very little in most scenarios. In talking to some of the developers the feature set for Data Link is not complete and they are still soliciting input for features and functionality. If you have ideas on how you want this feature to be more useful get involved and post your recommendations. As it stands, it looks to me like this component needs a lot of love to become useful. For this component to really provide value, bindings need to be able to be refreshed easily and work at the object level, not just the property level. It seems to me we would be much better served by a model binder object that can perform these binding/unbinding tasks in bulk rather than a tool where each link has to be mapped first. I also find the choice of creating a jQuery plug-in questionable – it seems a standalone object – albeit one that relies on the jQuery library – would provide a more intuitive interface than the current forcing of options onto a plug-in style interface. Out of the three Microsoft created components this is by far the least useful and least polished implementation at this point. 
jQuery Globalization http://github.com/jquery/jquery-global Globalization in JavaScript applications often gets short shrift and part of the reason for this is that natively in JavaScript there’s little support for formatting and parsing of numbers and dates. There are a number of JavaScript libraries out there that provide some support for globalization, but most are limited to a particular portion of globalization. As .NET developers we’re fairly spoiled by the richness of APIs provided in the framework and when dealing with client development one really notices the lack of these features. While you may not necessarily need to localize your application the globalization plug-in also helps with some basic tasks for non-localized applications: Dealing with formatting and parsing of dates and time values. Dates in particular are problematic in JavaScript as there are no formatters whatsoever except the .toString() method which outputs a verbose and next to useless long string. With the globalization plug-in you get a good chunk of the formatting and parsing functionality that the .NET framework provides on the server. You can write code like the following for example to format numbers and dates: var date = new Date(); var output = $.format(date, "MMM. dd, yy") + "\r\n" + $.format(date, "d") + "\r\n" + // 10/25/2010 $.format(1222.32213, "N2") + "\r\n" + $.format(1222.33, "c") + "\r\n"; alert(output); This becomes even more useful if you combine it with templates which can also include any JavaScript expressions. Assuming the globalization plug-in is loaded you can create template expressions that use the $.format function. Here’s the template I used earlier for the stock quote again with a couple of formats applied: <script id="stockTemplate" type="text/x-jquery-tmpl"> <div id="divStockQuote" class="errordisplay" style="width: 500px;"> <div class="label">Company:</div><div><b>${Company}(${Symbol})</b></div> <div class="label">Last Price:</div> <div>${$.format(LastPrice,"N2")}</div> <div class="label">Net Change:</div><div> {{if NetChange > 0}} <b style="color:green" >${NetChange}</b> {{else}} <b style="color:red" >${NetChange}</b> {{/if}} </div> <div class="label">Last Update:</div> <div>${$.format(LastQuoteTime,"MMM dd, yyyy")}</div> </div> </script> There are also parsing methods that can parse dates and numbers from strings into numbers easily: alert($.parseDate("25.10.2010")); alert($.parseInt("12.222")); // de-DE uses . for thousands separators As you can see culture specific options are taken into account when parsing. The globalization plugin provides rich support for a variety of locales: Get a list of all available cultures Query cultures for culture items (like currency symbol, separators etc.) Localized string names for all calendar related items (days of week, months) Generated off of .NET’s supported locales In short you get much of the same functionality that you already might be using in .NET on the server side. The plugin includes a huge number of locales and an Globalization.all.min.js file that contains the text defaults for each of these locales as well as small locale specific script files that define each of the locale specific settings. It’s highly recommended that you NOT use the huge globalization file that includes all locales, but rather add script references to only those languages you explicitly care about. Overall this plug-in is a welcome helper. 
Even if you use it with a single locale (like en-US) and do no other localization, you’ll gain solid support for number and date formatting which is a vital feature of many applications. Changes for Microsoft It’s good to see Microsoft coming out of its shell and away from the ‘not-built-here’ mentality that has been so pervasive in the past. It’s especially good to see it applied to jQuery – a technology that has stood in drastic contrast to Microsoft’s own internal efforts in terms of design, usage model and… popularity. It’s great to see that Microsoft is paying attention to what customers prefer to use and supporting the customer sentiment – even if it meant drastically changing course of policy and moving into a more open and sharing environment in the process. The additional jQuery support that has been introduced in the last two years certainly has made lives easier for many developers on the ASP.NET platform. It’s also nice to see Microsoft submitting proposals through the standard jQuery process of plug-ins and getting accepted for various very useful projects. Certainly the jQuery Templates plug-in is going to be very useful to many especially since it will be baked into the jQuery core in jQuery 1.5. I hope we see more of this type of involvement from Microsoft in the future. Kudos!© Rick Strahl, West Wind Technologies, 2005-2010Posted in jQuery  ASP.NET  

    Read the article

  • Plan Caching and Query Memory Part I – When not to use stored procedure or other plan caching mechanisms like sp_executesql or prepared statement

    - by sqlworkshops
      The most common performance mistake SQL Server developers make: SQL Server estimates memory requirement for queries at compilation time. This mechanism is fine for dynamic queries that need memory, but not for queries that cache the plan. With dynamic queries the plan is not reused for different set of parameters values / predicates and hence different amount of memory can be estimated based on different set of parameter values / predicates. Common memory allocating queries are that perform Sort and do Hash Match operations like Hash Join or Hash Aggregation or Hash Union. This article covers Sort with examples. It is recommended to read Plan Caching and Query Memory Part II after this article which covers Hash Match operations.   When the plan is cached by using stored procedure or other plan caching mechanisms like sp_executesql or prepared statement, SQL Server estimates memory requirement based on first set of execution parameters. Later when the same stored procedure is called with different set of parameter values, the same amount of memory is used to execute the stored procedure. This might lead to underestimation / overestimation of memory on plan reuse, overestimation of memory might not be a noticeable issue for Sort operations, but underestimation of memory will lead to spill over tempdb resulting in poor performance.   This article covers underestimation / overestimation of memory for Sort. Plan Caching and Query Memory Part II covers underestimation / overestimation for Hash Match operation. It is important to note that underestimation of memory for Sort and Hash Match operations lead to spill over tempdb and hence negatively impact performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note, with Hash Match operations, overestimation of memory can actually lead to poor performance.   To read additional articles I wrote click here.   In most cases it is cheaper to pay for the compilation cost of dynamic queries than huge cost for spill over tempdb, unless memory requirement for a stored procedure does not change significantly based on predicates.   The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts   Enough theory, let’s see an example where we sort initially 1 month of data and then use the stored procedure to sort 6 months of data.   Let’s create a stored procedure that sorts customers by name within certain date range.   --Example provided by www.sqlworkshops.com create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as begin       declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime       select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c             where c.CreationDate between @CreationDateFrom and @CreationDateTo             order by c.CustomerName       option (maxdop 1)       end go Let’s execute the stored procedure initially with 1 month date range.   set statistics time on go --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-01-31' go The stored procedure took 48 ms to complete.     The stored procedure was granted 6656 KB based on 43199.9 rows being estimated.       
      The estimated number of rows, 43199.9, is close to the actual number of rows, 43200, so the memory estimation should be fine. There were no Sort Warnings in SQL Profiler.

      Now let's execute the stored procedure with a 6 month date range.

      --Example provided by www.sqlworkshops.com
      exec CustomersByCreationDate '2001-01-01', '2001-06-30'
      go

      The stored procedure took 679 ms to complete. The stored procedure was granted 6656 KB based on an estimate of 43199.9 rows. The estimated number of rows, 43199.9, is way off from the actual number of rows, 259200, because the estimation is based on the first set of parameter values supplied to the stored procedure, which covered 1 month in our case. This underestimation leads to the sort spilling over to tempdb, resulting in poor performance. There were Sort Warnings in SQL Profiler.

      To monitor the amount of data written to and read from tempdb, one can execute select num_of_bytes_written, num_of_bytes_read from sys.dm_io_virtual_file_stats(2, NULL) before and after the stored procedure execution; for additional information refer to the webcast: www.sqlworkshops.com/webcasts.

      Let's recompile the stored procedure and then execute it first with the 6 month date range. In a production instance it is not advisable to use sp_recompile; instead one should use DBCC FREEPROCCACHE (plan_handle), because of the locking issues involved with sp_recompile. Refer to our webcasts for further details.

      exec sp_recompile CustomersByCreationDate
      go
      --Example provided by www.sqlworkshops.com
      exec CustomersByCreationDate '2001-01-01', '2001-06-30'
      go

      Now the stored procedure took only 294 ms instead of 679 ms. The stored procedure was granted 26832 KB of memory. The estimated number of rows, 259200, matches the actual number of rows, 259200. The better performance of this stored procedure is due to the better memory estimate and the sort no longer spilling over tempdb. There were no Sort Warnings in SQL Profiler.

      Now let's execute the stored procedure with a 1 month date range.

      --Example provided by www.sqlworkshops.com
      exec CustomersByCreationDate '2001-01-01', '2001-01-31'
      go

      The stored procedure took 49 ms to complete, similar to our very first execution. This time the stored procedure was granted more memory (26832 KB) than necessary (6656 KB), because the grant is based on the 6 month estimate (259200 rows) instead of the 1 month estimate (43199.9 rows). Again, the estimation is based on the first set of parameter values supplied to the stored procedure, which covered 6 months in this case. This overestimation did not affect performance here, but it might affect the performance of other concurrent queries requiring memory, and hence overestimation is not recommended. Overestimation can also hurt the performance of Hash Match operations; refer to the article Plan Caching and Query Memory Part II for further details.

      Let's recompile the stored procedure and then execute it first with a 2 day date range.

      exec sp_recompile CustomersByCreationDate
      go
      --Example provided by www.sqlworkshops.com
      exec CustomersByCreationDate '2001-01-01', '2001-01-02'
      go

      The stored procedure took 1 ms. The stored procedure was granted 1024 KB based on an estimate of 1440 rows. There were no Sort Warnings in SQL Profiler.
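      The article recommends DBCC FREEPROCCACHE (plan_handle) over sp_recompile for production instances but does not show how to look the handle up. A rough sketch of one way to do that on SQL Server 2008 or later follows; the dbo schema and the exact filtering are assumptions, not part of the article:

      --Sketch, not from the original article: evict only this procedure's cached plan
      --instead of running sp_recompile against it. If multiple plans exist for the
      --procedure, this picks one of them.
      declare @plan_handle varbinary(64)
      select @plan_handle = cp.plan_handle
      from sys.dm_exec_cached_plans cp
      cross apply sys.dm_exec_sql_text(cp.plan_handle) st
      where st.objectid = object_id('dbo.CustomersByCreationDate')
            and st.dbid = db_id()
      if @plan_handle is not null
            dbcc freeproccache (@plan_handle)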
      Now let's execute the stored procedure with the 6 month date range.

      --Example provided by www.sqlworkshops.com
      exec CustomersByCreationDate '2001-01-01', '2001-06-30'
      go

      The stored procedure took 955 ms to complete, way higher than the 679 ms or 294 ms we noticed before. The stored procedure was granted 1024 KB based on an estimate of 1440 rows, but we noticed earlier that this stored procedure needs 26832 KB of memory to execute the 6 month date range optimally without spilling over tempdb. This is a clear underestimation of memory and the reason for the very poor performance. There were Sort Warnings in SQL Profiler, and unlike before this was a Multiple pass sort instead of a Single pass sort; this occurs when the granted memory is far too low.

      Intermediate summary: this issue can be avoided by not caching the plan for memory-allocating queries. Another possibility is to use the recompile hint, or the optimize for hint to allocate memory for a predefined date range.

      Let's recreate the stored procedure with the recompile hint.

      --Example provided by www.sqlworkshops.com
      drop proc CustomersByCreationDate
      go
      create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as
      begin
            declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime
            select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c
                  where c.CreationDate between @CreationDateFrom and @CreationDateTo
                  order by c.CustomerName
            option (maxdop 1, recompile)
      end
      go

      Let's execute the stored procedure initially with the 1 month date range and then with the 6 month date range.

      --Example provided by www.sqlworkshops.com
      exec CustomersByCreationDate '2001-01-01', '2001-01-30'
      exec CustomersByCreationDate '2001-01-01', '2001-06-30'
      go

      The stored procedure took 48 ms and 291 ms, in line with the previous optimal execution times. The stored procedure with the 1 month date range has a good estimate like before. The stored procedure with the 6 month date range also has a good estimate and memory grant like before, because the query was recompiled with the current set of parameter values. The compilation time and compilation CPU of 1 ms is not expensive in this case compared to the performance benefit.

      Let's recreate the stored procedure with an optimize for hint for the 6 month date range.

      --Example provided by www.sqlworkshops.com
      drop proc CustomersByCreationDate
      go
      create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as
      begin
            declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime
            select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c
                  where c.CreationDate between @CreationDateFrom and @CreationDateTo
                  order by c.CustomerName
            option (maxdop 1, optimize for (@CreationDateFrom = '2001-01-01', @CreationDateTo = '2001-06-30'))
      end
      go

      Let's execute the stored procedure initially with the 1 month date range and then with the 6 month date range.

      --Example provided by www.sqlworkshops.com
      exec CustomersByCreationDate '2001-01-01', '2001-01-30'
      exec CustomersByCreationDate '2001-01-01', '2001-06-30'
      go

      The stored procedure took 48 ms and 291 ms, in line with the previous optimal execution times. The stored procedure with the 1 month date range now shows an overestimate of rows and memory; this is because we provided a hint to optimize for 6 months of data. The stored procedure with the 6 month date range has a good estimate and memory grant because we provided a hint to optimize for 6 months of data.
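      Not covered in the article: the sys.dm_io_virtual_file_stats tip mentioned earlier can be turned into a before/after delta to get a rough idea of how much a single execution writes to tempdb. A minimal sketch, assuming tempdb is database_id 2 as in the article's example and summing across its files:

      --Sketch, not from the original article: capture tempdb bytes written before and
      --after one execution; a large delta suggests the sort spilled.
      declare @written_before bigint, @written_after bigint
      select @written_before = sum(num_of_bytes_written)
      from sys.dm_io_virtual_file_stats(2, NULL)
      exec CustomersByCreationDate '2001-01-01', '2001-06-30'
      select @written_after = sum(num_of_bytes_written)
      from sys.dm_io_virtual_file_stats(2, NULL)
      select (@written_after - @written_before) / 1024 as tempdb_kb_written_during_exec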
      Let's execute the stored procedure with a 12 month date range using the currently cached plan for the 6 month date range.

      --Example provided by www.sqlworkshops.com
      exec CustomersByCreationDate '2001-01-01', '2001-12-31'
      go

      The stored procedure took 1138 ms to complete. 259200 rows were estimated based on the optimize for hint values for the 6 month date range; the actual number of rows is 524160 due to the 12 month date range. The stored procedure was granted only enough memory to sort the 6 month date range and not the 12 month date range, so there will be a spill over tempdb. There were Sort Warnings in SQL Profiler. As we see above, the optimize for hint cannot guarantee enough memory and optimal performance the way the recompile hint can.

      This article covers underestimation / overestimation of memory for Sort. Plan Caching and Query Memory Part II covers underestimation / overestimation for the Hash Match operation. It is important to note that underestimation of memory for Sort and Hash Match operations leads to spills over tempdb and hence negatively impacts performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, with Hash Match operations, overestimation of memory can actually lead to poor performance.

      Summary: A cached plan might lead to underestimation or overestimation of memory because the memory is estimated based on the first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this by using the recompile hint, but that will lead to compilation overhead; however, in most cases it is OK to pay for compilation rather than spill the sort over tempdb, which can be very expensive compared to the compilation cost. The other possibility is to use the optimize for hint, but if one sorts more data than the hint covers, this will still lead to a spill. On the other side there is also the possibility of overestimation, leading to unnecessary memory pressure on other concurrently executing queries; in the case of Hash Match operations, this overestimation of memory might lead to poor performance. When the data matching the values used in the optimize for hint has been archived out of the database, the estimation will be wrong, leading to worse performance, so one has to exercise caution before using the optimize for hint; the recompile hint is better in that case. I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts.

      Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011; click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants, not lectures. For consulting engagements click here.

      Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work.
This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.   R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article

  • SChannel "cannot find certificate in either LocalMachine or CurrentUser store"

    - by Chris J
    We have an in-house application that requires the use of client SSL certificates to authenticate with a remote server (not under our control). This has worked without problems before, but on deploying to a new server we're having problems getting Windows 2008 to use the certificate. The certificate exists as a .pfx file that contains a private key. The same certificate exists in the LocalMachine store, again with its private key. We've ensured the one in the LocalMachine store is correct by creating a website in IIS against that certificate, so we're happy that the certificate, certificate chain, and private key are valid. The PFX has been created by exporting from the Certificates MMC snap-in. The issue is that we get the following in the system diagnostic logs, which suggests it can't find the private key:

    System.Net Information: 0 : [5988] SecureChannel#23264094 – Locating the private key for the certificate:
    [Subject] CN=internal-server.company.com, OU=Servers, OU=Devices, O=org
    [Issuer] CN=SubCA02, OU=CA, o=org
    [Serial Number] 407ABCDE
    [Not Before] 31/10/2013 11:08:48 AM
    [Not After] 31/10/2016 11:08:48 AM
    [Thumbprint] 4354A34F6004F019E60F055979A47E50F62D1504
    System.Net Information: 0 : [5988] SecureChannel#23264094 – Cannot find the certificate in either the LocalMachine store or the CurrentUser store.

    I've validated the thumbprint, issuer and serial number listed in the log against the certificate in the LocalMachine store and these marry up. From what I can tell after much searching, this appears to be a permissions issue. The user the application runs as has been granted access to the private key (Personal Certificates - right click on the certificate - All Tasks - Manage Private Keys), so I'm now at a loss as to which permission(s) might be causing the issue.

    Read the article

  • Smart Array P400 - Accelerator Replacement Battery Failure

    - by inflammable
    TL;DR - Is the immediate failure of a replacement battery, for a failed battery, on the battery backed accelerator of a Smart Array P400 controller a common occurrence? Or are we likely to have a storage controller with an impending and critical fault? We have a slightly confusing situation with a Smart Array P400 storage controller with the 512 MB battery backed accelerator addon in an HP DL380 server. The storage controller is (afaik) running the latest firmware and driver:

    Model: Smart Array P400
    Controller Status: OK
    Firmware Version: 7.24
    Serial Number: *snip*
    Rebuild Priority: Medium
    Expand Priority: Medium
    Number Of Ports: 2

    The storage diagnostics (both on the boot-up screen for the controller and within the 'Management Homepage' and the 'HP Array Diagnostic Utility') recently started showing the following fault status for the accelerator battery:

    Accelerator Status: Temporarily Disabled
    Error Code: Cache Disabled Low Batteries
    Serial Number: *snip*
    Total Memory: 524288 KB
    Read Cache: 25%
    Write Cache: 75%
    Battery Status: Failed
    Read Errors: 0
    Write Errors: 0

    We replaced the battery with a new unit (a visual inspection of the P400 card showed nothing unusual) and saw the same fault - but we expected this to disappear over the course of a few hours/days as it charged. This didn't happen, and the fault status remains the same as above. Given the battery is a genuine part from HP, I wouldn't have expected a replacement battery to fail straight away, or to be dead on arrival (is that naivety on my part?). Is the immediate failure of a replacement battery, for a failed battery, on a battery backed accelerator a common occurrence? Or are we likely to have a storage controller with an impending and critical fault? Is there any diagnostic that could tell me more about the failed battery, without cracking the server open again? Many thanks!

    Read the article

  • blu-ray archiving in vmware ESXi 4

    - by spacecadet77
    Hi, I need some advice about using a blu-ray writer for archiving data on a VMware ESXi 4 host. At the office we have an IBM System x3400 Tower server with the ESXi 4 hypervisor and OpenSuse and CentOS GNU/Linux systems as guests. Will a blu-ray writer work in this setup, and if it will, is there any particular model you can suggest? Best regards

    IBM System x3400 Tower server specification: 1x Intel Quad-Core Xeon E5410 2.33GHz/ 12MB/ 1333MHz (2x CPU max) Intel 5000P chipset, 2x 1GB PC2-5300 DDR2 667MHz SDRAM ECC Chipkill (32GB max) 2x4GB (2x2GB) PC2-5300 CL5 ECC DDR2 FBDIMM (x3400, x3550, x3650) SAS/SATA Hot-Swap Open Bay (0xHDD std, 4xHDD max, 8xHDD optional) ServeRAID 8K dual channel SAS/SATA controller (RAID 0,1,1E,10,5,6, 256MB, Battery Backup) Graphics ATI® RN50(ES1000) 16MB DDR, CD-RW/DVD Combo no FDD GigaEthernet, Tower with Power Supply 835W (opt Redundant) Slot 1: half-length, PCI-Express x8(x4 electrical) Slot 2: full, PCI-Express x8 Slot 3: full, PCI-Express x8 Slot 4: full, 64-bit 133MHz 3.3v PCI-X Slot 5: full, 64-bit 133MHz 3.3v PCI-X, Slot 6: half-length, 32-bit 33MHz 5.0v PCI ports: 4x USB (Vers 2.0), 2x PS/2, parallel, 2x serial (9-pin), VGA, RJ-45 (ethernet), RJ-45 (sys mgm) HDD 4 x TB 7200rpm / Serial ATA II 3.0Gb/s / 16MB, RoHS

    Read the article

  • Problem with switch dell 6224

    - by Matias
    Hello, we have just upgraded the firmware of a Dell PowerConnect 6224 switch and it won't reload. These are the symptoms:
    - I power up the switch with the serial cable connected to it and the switch outputs nothing. The configuration of the serial console is fine: 9600 baud, etc. In fact, before the upgrade, I was connected to the switch through the very same cable.
    - Resetting the switch with its reset pinhole does not reset the switch: the power and fan lights power off while I keep the pinhole pressed, but the switch itself does not reset.
    - When I connect a UTP cable to one of the switch's ports, the green lights don't flash, but ''mii-tool eth0'' on my laptop shows there is link!!
    The only thing I see in the output, different from other upgrades I've done, is this line at the end: Erasing Boot Flash.....^^^^Done. Any help or idea will be more than welcome!! Thanks!!

    console#show version
    Image Descriptions
    image1 :
    image2 :
    Images currently available on Flash
    --------------------------------------------------------------------
    unit image1 image2 current-active next-active
    1 <none> 3.0.0.8 image2 image2
    console#boot system image2
    Activating image image2 ..
    console#update bootcode
    Update bootcode and reset (Y/N)? Updating boot code ... Extracting boot code from image... Erasing Boot Flash.....^^^^Done.

    Read the article

  • Is it possible to shrink the size of an HP Smart Array logical drive?

    - by ewwhite
    I know extension is quite possible using the hpacucli utility, but is there an easy way to reduce the size of an existing logical drive (not array)? The controller is a P410i in a ProLiant DL360 G6 server. I'd like to reduce logicaldrive 1 from 72GB to 40GB.

    => ctrl all show config detail

    Smart Array P410i in Slot 0 (Embedded)
       Bus Interface: PCI
       Slot: 0
       Serial Number: 5001438006FD9A50
       Cache Serial Number: PAAVP9VYFB8Y
       RAID 6 (ADG) Status: Disabled
       Controller Status: OK
       Chassis Slot:
       Hardware Revision: Rev C
       Firmware Version: 3.66
       Rebuild Priority: Medium
       Expand Priority: Medium
       Surface Scan Delay: 3 secs
       Surface Scan Mode: Idle
       Queue Depth: Automatic
       Monitor and Performance Delay: 60 min
       Elevator Sort: Enabled
       Degraded Performance Optimization: Disabled
       Inconsistency Repair Policy: Disabled
       Wait for Cache Room: Disabled
       Surface Analysis Inconsistency Notification: Disabled
       Post Prompt Timeout: 15 secs
       Cache Board Present: True
       Cache Status: OK
       Accelerator Ratio: 25% Read / 75% Write
       Drive Write Cache: Enabled
       Total Cache Size: 512 MB
       No-Battery Write Cache: Disabled
       Cache Backup Power Source: Batteries
       Battery/Capacitor Count: 1
       Battery/Capacitor Status: OK
       SATA NCQ Supported: True

       Array: A
          Interface Type: SAS
          Unused Space: 412476 MB
          Status: OK

          Logical Drive: 1
             Size: 72.0 GB
             Fault Tolerance: RAID 1+0
             Heads: 255
             Sectors Per Track: 32
             Cylinders: 18504
             Strip Size: 256 KB
             Status: OK
             Array Accelerator: Enabled
             Unique Identifier: 600508B1001C132E4BBDFAA6DAD13DA3
             Disk Name: /dev/cciss/c0d0
             Mount Points: /boot 196 MB, / 12.0 GB, /usr 8.0 GB, /var 4.0 GB, /tmp 2.0 GB
             OS Status: LOCKED
             Logical Drive Label: AE438D6A5001438006FD9A50BE0A
             Mirror Group 0:
                physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 146 GB, OK)
                physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 146 GB, OK)
             Mirror Group 1:
                physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SAS, 146 GB, OK)
                physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SAS, 146 GB, OK)

       SEP (Vendor ID PMCSIERA, Model SRC 8x6G) 250
          Device Number: 250
          Firmware Version: RevC
          WWID: 5001438006FD9A5F
          Vendor ID: PMCSIERA
          Model: SRC 8x6G

    Read the article

  • Time drift in Cloud Server - need to manipulate GRUB config

    - by Aditya Advani
    We are hosting a VPS on a popular host and are experiencing a regular forward time drift of several minutes a day (approx 7). Linux kernel: 2.6.18-164.11.1.el5 GNU/Linux. Distro: CentOS release 5.4 (Final). We reached out to our hosting provider and their support advised us: "This is a known issue with Cloud Servers. To fix this you will need to add one line to your grub config located at: /boot/grub/menu.lst The line you need to add is: noapic nolapic divider=10 nolapic_timer This should correct this issue. You will need to restart after this is added in." Because I am wary of manipulating grub - mostly I'm terrified that our server may fail to restart - I ask you guys, the pro *nix admins: where exactly in this file does the recommended insertion below go?

    # line from 1&1 for time syncing issue (Case 5163)
    noapic nolapic divider=10 nolapic_timer

    Please specify where exactly, and whether the order of commands is or is not important. Why is the block below "title CentOS ..." indented? If someone could give me an overview of how this works or point me to a resource that's easy to follow, that's what I'm looking for immediately: a light overview or basic understanding of what I'm doing. If GRUB and bootloaders are a deep dark treasure trove of kernel hacking or something, that's great; well-recommended in-depth resources are also very welcome. This is my current /boot/grub/menu.lst:

    # grub.conf generated by anaconda
    #
    # Note that you do not have to rerun grub after making changes to this file
    #boot=/dev/sda
    # serial --unit=0 --speed=57600
    terminal --timeout=5 serial console
    timeout=5
    title CentOS (2.6.18-164.11.1.el5)
            root (hd0,0)
            kernel /boot/vmlinuz-2.6.18-164.11.1.el5 ro root=/dev/hda1 console=tty0 console=tty
            initrd /boot/initrd-2.6.18-164.11.1.el5.img

    MOST IMPORTANT: I need to know where in the file above it is appropriate to paste the suggested line so I can confidently restart my VPS after manipulating the GRUB config.

    Read the article

  • Communicating via Command Mode with IBM HS22 IMM via AMM

    - by MikeyB
    On previous model blades that contained a BMC, I was able to communicate from our external management station via pass-through commands to the BMC to do things such as power blades on/off, set VPD parameters, reboot the BMC, etc. Now on the HS22, a bunch of things happen differently. For example, we can no longer use the same pass-through commands to write VPD information pages and have them persist across reboots of the IMM - it looks as though those VPD pages are populated from information contained in the IMM. How do we use the Advanced Settings Utility from an external host to communicate with HS22 IMMs? Alternatively, what TCP Command Mode commands do we need to send to the AMM to communicate with the IMM? For our purposes, we specifically cannot communicate with the IMM from the blade itself. Specific example: when I send a pass-through IPMI command via the AMM to the blade BMC to write information (such as MTM, Serial) into VPD page 0x10, it persists on blades with a BMC (HS21 for example). I can send the same IPMI command to write data to the VPD page on the HS22; however, it does not persist across reboots of the IMM. What IPMI commands do I need to send to the IMM? What IPMI commands is asu sending when it sets the MTM & Serial?

    Read the article

  • Ubuntu xrandr rotate issue

    - by user83544
    I've just bought a second monitor for my PC which happens to be a pivot monitor. I've already read lots of forum threads related to my problem but haven't come across a solution - I have the same symptoms as dozens of posts, but no matter what I try it just doesn't work. I've already changed the xorg.conf file and added the following in the device section, just under Driver "nvidia", for my second monitor: Option "RandRRotation" "on". When I save and reboot, I try to rotate my screen with the nvidia X server settings by choosing the second monitor and clicking either "left" or "right" for the rotation. It immediately exits the nvidia settings window and does nothing. I tried within the terminal by typing: xrandr -o right and I get the following error:

    X Error of failed request: BadMatch (invalid parameter attributes)
    Major opcode of failed request: 154 (RANDR)
    Minor opcode of failed request: 2 (RRSetScreenConfig)
    Serial number of failed request: 14
    Current serial number in output stream: 14

    I actually manage to rotate it with Option "Rotate" "CCW" instead of "RandRRotation". The problem with this solution is that you get the second monitor in the right position, but any window you open on that screen is practically unchangeable: you can't resize it or move it, making it useless for reading PDFs, which is the main reason I bought this second screen to help me write my thesis. Any help is really appreciated.

    hiram@hiram-linux:~$ sudo lshw -c video
    *-display
       description: VGA compatible controller
       product: nVidia Corporation
       vendor: nVidia Corporation
       physical id: 0
       bus info: pci@0000:01:00.0
       version: a1
       width: 64 bits
       clock: 33MHz
       capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
       configuration: driver=nvidia latency=0
       resources: irq:16 memory:f8000000-f9ffffff memory:d8000000-dfffffff memory:d4000000-d7ffffff ioport:dc00(size=12 memory:fbd80000-fbdfffff

    Read the article

  • DIR $file "File Not Found" vs DIR $filedir shows it....not permissions, not USB

    - by Kev
    I was having this problem before on a USB drive, but now it's happening on my main RAID5-backed hard disk:

    2013-10-17 9:37 C:\>dir "C:\Shares\Shared\Reference\Safety Management System\Video CD\AutoPlay\Docs\Manuel*"
     Volume in drive C has no label.
     Volume Serial Number is 3C18-E114
     Directory of C:\Shares\Shared\Reference\Safety Management System\Video CD\AutoPlay\Docs
    2003-09-09 11:29 PM 1,056,768 Manuel d'intervention d'urgence MFC.doc
    2004-06-20 10:36 PM 139,849 Manuel d'intervention d'urgence MFC.pdf
    2 File(s) 1,196,617 bytes
    0 Dir(s) 196,068,691,968 bytes free

    2013-10-17 9:38 C:\>dir "C:\Shares\Shared\Reference\Safety Management System\Video CD\AutoPlay\Docs\Manuel d'intervention d'urgence MFC.doc"
     Volume in drive C has no label.
     Volume Serial Number is 3C18-E114
     Directory of C:\Shares\Shared\Reference\Safety Management System\Video CD\AutoPlay\Docs
    File Not Found

    2013-10-17 9:38 C:\>

    This is from a Command Prompt window where I went to Properties and told it I wanted to modify who it ran as. I opened it, had it run as me with the "restricted access" box unchecked, then ran the above. The file in question has the following ACLs: Administrators, SYSTEM, and OurCompanyUsers. All three have full control of everything. Nobody has any Deny bits set. I am a member of Administrators, so I don't believe it's a permissions issue. It's not a USB drive, so this time there is no question of USB hardware. Windows Server 2003 Standard Edition SP2. What does this mean? Is this more likely a hardware or software problem?

    Read the article

  • Are SATA II and SATA 3.0 Gbps compatible?

    - by Johnny Maelstrom
    I am trying to check that if I buy a new internal HDD it will work in the NAS I am buying. Currently I'm confused about naming schemes and once that is resolved whether there is compatibility. I will gladly author this question to be more general if there is not already an article helping with the confusion of SATA naming and standards. I see similar, but not identical questions and will accept this as a duplicate if thought as such. The specifications on the eCommerce site for the NAS says, "Controller Interface Type Serial ATA-150", the product home page for the manufacturer says, "Compatible with SATA and SATA II HDD". The specifications on the eCommerce site for the hard drives say, "Interface Type Serial ATA-300", the product home page for the manufacturer says, "Interface SATA 3.0 Gbps" Wikipedia says many things about different naming conventions, the closest being, "SATA II 3.0 Gbit/s, which was colloquially referred to as "SATA 3G" [bps] or "SATA 300" [MB/s] since 1.5 Gbit/s SATA I and 1.5 Gbit/s SATA II were referred to as both "SATA 1.5G" [b/s] or "SATA 150" [MB/s]). Therefore, they will operate with negligible differences between them." Are SATA II and SATA 3.0 Gbps the same? I feel I'm tantalisingly close to getting a definitive answer here before I purchase, but really want to clear up these naming schemes.

    Read the article

  • Architecture for highly available MySQL with automatic failover in physically diverse locations

    - by Warner
    I have been researching high availability (HA) solutions for MySQL between data centers. For servers located in the same physical environment, I have preferred dual master with heartbeat (floating VIP) using an active-passive approach. The heartbeat runs over both a serial connection and an ethernet connection. Ultimately, my goal is to maintain this same level of availability, but between data centers. I want to dynamically fail over between both data centers without manual intervention and still maintain data integrity. There would be BGP on top, and web clusters in both locations, which would have the potential to route to the databases on either side. If the Internet connection went down at site 1, clients would route through site 2, to the web cluster, and then to the database in site 1 if the link between both sites is still up. With this scenario, due to the lack of a physical (serial) link, there is a greater chance of split brain. If the WAN went down between both sites, the VIP would end up on both sites, where a variety of unpleasant scenarios could introduce desync. Another potential issue I see is difficulty scaling this infrastructure to a third data center in the future. The network layer is not a focus; the architecture is flexible at this stage. Again, my focus is a solution for maintaining data integrity as well as automatic failover with the MySQL databases. I would likely design the rest around this. Can you recommend a proven solution for MySQL HA between two physically diverse sites? Thank you for taking the time to read this. I look forward to reading your recommendations.

    Read the article

< Previous Page | 171 172 173 174 175 176 177 178 179 180 181 182  | Next Page >