Search Results

Search found 15634 results on 626 pages for 'foreach loop container'.


  • Disabling VLC's crash message

    - by geotheory
    I have a Mac Mini running videos on a loop for public display, and it occasionally crashes, I think generally when other OS X functions kick in. I have a script that detects when VLC is not running and relaunches it, but there is often a system message on top saying "VLC crashed previously". Is there a way to disable this (I can find no option in VLC's advanced options), or perhaps to feed a 'Continue' response to it via AppleScript?
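    One possible direction for the AppleScript idea, sketched with System Events UI scripting. It assumes the crash prompt belongs to the VLC process and that its button really is labeled "Continue" (check the actual dialog; it may belong to a separate crash-reporter process), and it requires Accessibility access for whatever runs the script:

        tell application "System Events"
            if exists (process "VLC") then
                tell process "VLC"
                    -- Assumption: the crash prompt is window 1 with a "Continue" button
                    if exists (button "Continue" of window 1) then
                        click button "Continue" of window 1
                    end if
                end tell
            end if
        end tell

    The existing watchdog script could run this right after relaunching VLC.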

  • download a series of files automatically using command line/wget

    - by anasnakawa
    I would like to trigger an automatic download of a list of 114 files (recitations) for each reader. For example, if I want to download the recitations of a reader called abkr, the URLs will look like the following:

        http://server6.mp3quran.net/abkr/001.mp3
        http://server6.mp3quran.net/abkr/002.mp3
        ...
        http://server6.mp3quran.net/abkr/113.mp3
        http://server6.mp3quran.net/abkr/114.mp3

    These are simply Quran recitations, so there are always 114 in total. Is there an easy way to loop over these from the command line on Windows?
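    A minimal sketch of one approach in a Windows batch file, assuming wget.exe is on the PATH (the reader name is a placeholder to substitute):

        @echo off
        setlocal enabledelayedexpansion
        set READER=abkr
        rem Count 1..114, zero-padding each number to three digits (1 -> 001)
        for /L %%i in (1,1,114) do (
            set "N=00%%i"
            wget "http://server6.mp3quran.net/%READER%/!N:~-3!.mp3"
        )

    The same loop works directly at the prompt if %%i is written as %i.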

  • how to change the existing printed line in AWK

    - by manimaran
    Hi, when I execute the following line, it prints the words on separate lines.

        awk 'BEGIN { print "line one\nline two\nline three" }'

    like

        line one
        line two
        line three

    How can I print the information on the same line, flushing the existing line each time? For example, while executing a loop it should print 'one', then wipe the line and print 'two', then wipe the line and print 'three', and so on. Can you please assist me?
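    A minimal sketch of one way to do this, assuming the output goes to a terminal that honors carriage returns: printing "\r" moves the cursor back to column 1 so the next value overwrites the previous one, and the field padding blanks out leftovers from longer words:

        awk 'BEGIN {
            n = split("one two three", words, " ")
            for (i = 1; i <= n; i++) {
                # \r returns to the start of the line without starting a new one;
                # %-10s left-justifies in a 10-column field, so the trailing
                # spaces erase a longer previous word.
                printf "\r%-10s", words[i]
                system("sleep 1")   # pause so the overwrite is visible
            }
            printf "\n"
        }'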

  • What is a good operating system for relative newbies for setting up a website and source control? [on hold]

    - by Zeroth
    I'm part of a small group of people working on video games. We want to set up our own boxes to serve the website and handle source control for development work. I have the most technical expertise, but it's been a few years since I've set up or played with Linux or other open source operating systems, so I'm a bit out of the loop. What are the user-friendly open source operating systems out there right now?

  • Flex actionscript extending DateChooser, events in calendar

    - by Nemi
    The ExtendedDateChooser class is a great solution for the simple event calendar used in my Flex project. You can find it by googling "Adding-Calendar-Event-Entries-to-the-Flex-DateChooser-Component"; a link to an updated solution is in the comments of that post. I have posted the files below. The problem with that calendar is that the event text goes missing when the month is changed. Is there an updateCompleted event in ActionScript, just like on the DateChooser Flex component? As in:

        <mx:DateChooser id="dc"
            updateCompleted="goThroughDateChooserCalendarLayoutAndSetEventsInCalendarAgain()"/>

    When the scroll event is added, which is available in ActionScript, it does get dispatched, but only after updateDisplayList() has fired, so I didn't manage to work out why the calendar events are erased. Any suggestions on what to add to the code, maybe some function to override?

    ExtendedDateChooserClass.mxml:

        <?xml version='1.0' encoding="utf-8"?>
        <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml"
                        xmlns:mycomp="cyberslingers.controls.*"
                        layout="absolute" creationComplete="init()">
            <mx:Script>
                <![CDATA[
                    import cyberslingers.controls.ExtendedDateChooser;
                    import mx.rpc.events.ResultEvent;
                    import mx.rpc.events.FaultEvent;
                    import mx.controls.Alert;

                    public var mycal:ExtendedDateChooser = new ExtendedDateChooser();

                    // collection to hold date, data and label
                    [Bindable]
                    public var dateCollection:XMLList = new XMLList();

                    private function init():void {
                        eventList.send();
                    }

                    private function readCollection(event:ResultEvent):void {
                        dateCollection = event.result.calendarevent;
                        // Position and size the calendar
                        mycal.width = 400;
                        mycal.height = 400;
                        // Add the data from the XML file to the calendar
                        mycal.dateCollection = dateCollection;
                        // Add the calendar to the canvas
                        this.addChild(mycal);
                    }

                    private function readFaultHandler(event:FaultEvent):void {
                        Alert.show(event.fault.message, "Could not load data");
                    }
                ]]>
            </mx:Script>
            <mx:HTTPService id="eventList" url="data.xml" resultFormat="e4x"
                            result="readCollection(event);" fault="readFaultHandler(event);"/>
        </mx:Application>

    ExtendedDateChooser.as:

        package cyberslingers.controls {
            import flash.events.Event;
            import flash.events.TextEvent;
            import mx.collections.ArrayCollection;
            import mx.controls.Alert;
            import mx.controls.CalendarLayout;
            import mx.controls.DateChooser;
            import mx.core.UITextField;
            import mx.events.FlexEvent;

            public class ExtendedDateChooser extends DateChooser {

                public function ExtendedDateChooser() {
                    super();
                    this.addEventListener(TextEvent.LINK, linkHandler);
                    this.addEventListener(FlexEvent.CREATION_COMPLETE, addEvents);
                }

                // datasource
                public var dateCollection:XMLList = new XMLList();

                /**
                 * Loop through the calendar control and add event links
                 */
                private function addEvents(e:Event):void {
                    // loop through all the calendar children
                    for (var i:uint = 0; i < this.numChildren; i++) {
                        var calendarObj:Object = this.getChildAt(i);
                        // find the CalendarLayout object
                        if (calendarObj.hasOwnProperty("className") && calendarObj.className == "CalendarLayout") {
                            var cal:CalendarLayout = CalendarLayout(calendarObj);
                            // loop through all the CalendarLayout children
                            for (var j:uint = 0; j < cal.numChildren; j++) {
                                var dateLabel:Object = cal.getChildAt(j);
                                // find all UITextFields
                                if (dateLabel.hasOwnProperty("text")) {
                                    var day:UITextField = UITextField(dateLabel);
                                    var dayHTML:String = day.text;
                                    day.selectable = true;
                                    day.wordWrap = true;
                                    day.multiline = true;
                                    day.styleName = "EventLabel";
                                    // TODO: passing the date as a string is not ideal, tough to validate
                                    // Make sure to add one to month since it is zero based
                                    var eventArray:Array = dateHelper((this.displayedMonth + 1) + "/" + dateLabel.text + "/" + this.displayedYear);
                                    if (eventArray.length > 0) {
                                        for (var k:uint = 0; k < eventArray.length; k++) {
                                            dayHTML += "<br><A HREF='event:" + eventArray[k].data + "' TARGET=''>" + eventArray[k].label + "</A>";
                                        }
                                        day.htmlText = dayHTML;
                                    }
                                }
                            }
                        }
                    }
                }

                /**
                 * Handle clicking a text link
                 */
                private function linkHandler(event:TextEvent):void {
                    // What do we want to do when the user clicks an entry?
                    Alert.show("selected: " + event.text);
                }

                /**
                 * Build an array of events for the current date
                 */
                private function dateHelper(renderedDate:String):Array {
                    var result:Array = new Array();
                    for (var i:uint = 0; i < dateCollection.length(); i++) {
                        if (dateCollection[i].date == renderedDate) {
                            result.push(dateCollection[i]);
                        }
                    }
                    return result;
                }
            }
        }

    data.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <rss>
            <calendarevent>
                <date>8/22/2009</date>
                <data>This is a test 1</data>
                <label>Stephens Test 1</label>
            </calendarevent>
            <calendarevent>
                <date>8/23/2009</date>
                <data>This is a test 2</data>
                <label>Stephens Test 2</label>
            </calendarevent>
        </rss>
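    A possible lead (an assumption to verify, not a confirmed fix): Flex UIComponents dispatch FlexEvent.UPDATE_COMPLETE ("updateComplete", the ActionScript counterpart of the MXML attribute asked about above) after each layout pass, so listening for it instead of only CREATION_COMPLETE might re-add the labels after a month change:

        // Sketch: re-run addEvents after every layout pass.
        // UPDATE_COMPLETE fires often, so a guard on displayedMonth may be needed.
        this.addEventListener(FlexEvent.UPDATE_COMPLETE, addEvents);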

  • Polynomial division overloading operator (solved)

    - by Vlad
    OK, here are the operations I have successfully coded so far, thanks to your help.

    Addition:

        polinom operator+(const polinom& P) const {
            polinom Result;
            constIter i = poly.begin(), j = P.poly.begin();
            while (i != poly.end() && j != P.poly.end()) { // logic while both iterators are valid
                if (i->pow > j->pow) { // if the current term's degree of the first polynomial is bigger
                    Result.insert(i->coef, i->pow); i++;
                } else if (j->pow > i->pow) { // if the other polynomial's term degree is bigger
                    Result.insert(j->coef, j->pow); j++;
                } else { // if both are equal
                    Result.insert(i->coef + j->coef, i->pow); i++; j++;
                }
            }
            // handle the remaining items in each list
            // note: at least one will be equal to end(), but that loop will simply be skipped
            while (i != poly.end()) { Result.insert(i->coef, i->pow); ++i; }
            while (j != P.poly.end()) { Result.insert(j->coef, j->pow); ++j; }
            return Result;
        }

    Subtraction:

        polinom operator-(const polinom& P) const { // fixed prototype re. const-correctness
            polinom Result;
            constIter i = poly.begin(), j = P.poly.begin();
            while (i != poly.end() && j != P.poly.end()) { // logic while both iterators are valid
                if (i->pow > j->pow) { // if the current term's degree of the first polynomial is bigger
                    Result.insert(-(i->coef), i->pow); i++;
                } else if (j->pow > i->pow) { // if the other polynomial's term degree is bigger
                    Result.insert(-(j->coef), j->pow); j++;
                } else { // if both are equal
                    Result.insert(i->coef - j->coef, i->pow); i++; j++;
                }
            }
            // handle the remaining items in each list
            // note: at least one will be equal to end(), but that loop will simply be skipped
            while (i != poly.end()) { Result.insert(i->coef, i->pow); ++i; }
            while (j != P.poly.end()) { Result.insert(j->coef, j->pow); ++j; }
            return Result;
        }

    Multiplication:

        polinom operator*(const polinom& P) const {
            polinom Result;
            constIter i, j, lastItem = Result.poly.end();
            Iter it1, it2, first, last;
            int nr_matches;
            for (i = poly.begin(); i != poly.end(); i++) {
                for (j = P.poly.begin(); j != P.poly.end(); j++)
                    Result.insert(i->coef * j->coef, i->pow + j->pow);
            }
            Result.poly.sort(SortDescending());
            lastItem--;
            while (true) {
                nr_matches = 0;
                for (it1 = Result.poly.begin(); it1 != lastItem; it1++) {
                    first = it1; last = it1; first++;
                    for (it2 = first; it2 != Result.poly.end(); it2++) {
                        if (it2->pow == it1->pow) {
                            it1->coef += it2->coef;
                            nr_matches++;
                        }
                    }
                    nr_matches++;
                    do { last++; nr_matches--; } while (nr_matches != 0);
                    Result.poly.erase(first, last);
                }
                if (nr_matches == 0) break;
            }
            return Result;
        }

    Division (edited):

        polinom operator/(const polinom& P) const {
            polinom Result, temp2;
            polinom temp = *this;
            Iter i = temp.poly.begin();
            constIter j = P.poly.begin();
            int resultSize = 0;
            if (temp.poly.size() < 2) {
                if (i->pow >= j->pow) {
                    Result.insert(i->coef / j->coef, i->pow - j->pow);
                    temp = temp - Result * P;
                } else {
                    Result.insert(0, 0);
                }
            } else {
                while (true) {
                    if (i->pow >= j->pow) {
                        Result.insert(i->coef / j->coef, i->pow - j->pow);
                        if (Result.poly.size() < 2)
                            temp2 = Result;
                        else {
                            temp2 = Result;
                            resultSize = Result.poly.size();
                            for (int k = 1; k != resultSize; k++)
                                temp2.poly.pop_front();
                        }
                        temp = temp - temp2 * P;
                    } else
                        break;
                }
            }
            return Result;
        }

    The first three work correctly, but division does not; it seems the program gets stuck in an infinite loop.

    Final update: after listening to Dave, I finally made it work by overloading both / and % to return the quotient and the remainder, so thanks a lot everyone for your help, and especially you, Dave, for your great idea! P.S. If anyone wants me to post these two overloaded operators, please ask by commenting on my post (and maybe give an upvote to everyone involved).
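    For reference, a compact sketch of the long-division loop the updates describe, written against an assumed polinom interface (a term list sorted by descending pow, insert(coef, pow), and the operator+, operator- and operator* above); it is an illustration of the algorithm, not the poster's final code:

        #include <utility>

        // Divide a by b, returning quotient q and remainder r, so that a = q*b + r.
        // Assumes subtraction removes cancelled (zero) leading terms; if it does not,
        // the degree test keeps succeeding forever, which matches the infinite loop seen.
        std::pair<polinom, polinom> divide(const polinom& a, const polinom& b) {
            polinom q;     // quotient, built one term at a time
            polinom r = a; // running remainder
            while (!r.poly.empty() && r.poly.front().pow >= b.poly.front().pow) {
                polinom t; // next quotient term: ratio of the two leading terms
                t.insert(r.poly.front().coef / b.poly.front().coef,
                         r.poly.front().pow - b.poly.front().pow);
                q = q + t;
                r = r - t * b; // cancel the leading term of the remainder
            }
            return std::make_pair(q, r); // operator/ returns q, operator% returns r
        }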

  • Polynomial division overloading operator

    - by Vlad
    OK, here are the operations I have successfully coded so far, thanks to your help.

    Addition:

        polinom operator+(const polinom& P) const {
            polinom Result;
            constIter i = poly.begin(), j = P.poly.begin();
            while (i != poly.end() && j != P.poly.end()) { // logic while both iterators are valid
                if (i->pow > j->pow) { // if the current term's degree of the first polynomial is bigger
                    Result.insert(i->coef, i->pow); i++;
                } else if (j->pow > i->pow) { // if the other polynomial's term degree is bigger
                    Result.insert(j->coef, j->pow); j++;
                } else { // if both are equal
                    Result.insert(i->coef + j->coef, i->pow); i++; j++;
                }
            }
            // handle the remaining items in each list
            // note: at least one will be equal to end(), but that loop will simply be skipped
            while (i != poly.end()) { Result.insert(i->coef, i->pow); ++i; }
            while (j != P.poly.end()) { Result.insert(j->coef, j->pow); ++j; }
            return Result;
        }

    Subtraction:

        polinom operator-(const polinom& P) const { // fixed prototype re. const-correctness
            polinom Result;
            constIter i = poly.begin(), j = P.poly.begin();
            while (i != poly.end() && j != P.poly.end()) { // logic while both iterators are valid
                if (i->pow > j->pow) { // if the current term's degree of the first polynomial is bigger
                    Result.insert(-(i->coef), i->pow); i++;
                } else if (j->pow > i->pow) { // if the other polynomial's term degree is bigger
                    Result.insert(-(j->coef), j->pow); j++;
                } else { // if both are equal
                    Result.insert(i->coef - j->coef, i->pow); i++; j++;
                }
            }
            // handle the remaining items in each list
            // note: at least one will be equal to end(), but that loop will simply be skipped
            while (i != poly.end()) { Result.insert(i->coef, i->pow); ++i; }
            while (j != P.poly.end()) { Result.insert(j->coef, j->pow); ++j; }
            return Result;
        }

    Multiplication:

        polinom operator*(const polinom& P) const {
            polinom Result;
            constIter i, j, lastItem = Result.poly.end();
            Iter it1, it2, first, last;
            int nr_matches;
            for (i = poly.begin(); i != poly.end(); i++) {
                for (j = P.poly.begin(); j != P.poly.end(); j++)
                    Result.insert(i->coef * j->coef, i->pow + j->pow);
            }
            Result.poly.sort(SortDescending());
            lastItem--;
            while (true) {
                nr_matches = 0;
                for (it1 = Result.poly.begin(); it1 != lastItem; it1++) {
                    first = it1; last = it1; first++;
                    for (it2 = first; it2 != Result.poly.end(); it2++) {
                        if (it2->pow == it1->pow) {
                            it1->coef += it2->coef;
                            nr_matches++;
                        }
                    }
                    nr_matches++;
                    do { last++; nr_matches--; } while (nr_matches != 0);
                    Result.poly.erase(first, last);
                }
                if (nr_matches == 0) break;
            }
            return Result;
        }

    Division (edited):

        polinom operator/(const polinom& P) {
            polinom Result, temp;
            Iter i = poly.begin();
            constIter j = P.poly.begin();
            if (poly.size() < 2) {
                if (i->pow >= j->pow) {
                    Result.insert(i->coef, i->pow - j->pow);
                    *this = *this - Result;
                }
            } else {
                while (true) {
                    if (i->pow >= j->pow) {
                        Result.insert(i->coef, i->pow - j->pow);
                        temp = Result * P;
                        *this = *this - temp;
                    } else
                        break;
                }
            }
            return Result;
        }

    The first three work correctly, but division does not; it seems the program gets stuck in an infinite loop.

    Update: because no one seems to understand how I intended the algorithm to work, I'll explain. If the dividend contains only one term, we simply insert the quotient into Result, then multiply it by the divisor and subtract that from the first polynomial, which stores the remainder. Otherwise we repeat this until the second polynomial (P in this case) becomes bigger. I think this algorithm is called long division, isn't it? So, based on this, can anyone help me overload the / operator correctly for my class? Thanks!

  • Hung JVM consuming 100% CPU

    - by Bogdan
    We have a Java server running on Sun JRE 6u20 on 32-bit Linux (CentOS). We use the server HotSpot VM with the CMS collector and the following options (I've only provided the relevant ones):

        -Xmx896m -Xss128k -XX:NewSize=384M -XX:MaxPermSize=96m
        -XX:+UseParNewGC -XX:+UseConcMarkSweepGC

    Sometimes, after running for a while, the JVM seems to slip into a hung state in which, even though we make no requests to the application, the CPU continues to spin at 100% (we have 8 logical CPUs, so it looks like only one CPU is spinning). In this state the JVM doesn't respond to SIGQUIT (kill -3), and we can't connect to it normally with jstack. We CAN connect with "jstack -F", but the output is dodgy (we can see lots of NullPointerExceptions from jstack, apparently because it wasn't able to 'walk' some stacks), so the "jstack -F" output seems to be useless. We did get a stack dump from gdb, though, and we were able to match the thread id that spins the CPU (found using "top" in its per-thread view, the "H" option) with a thread stack that appears in the gdb result. This is what it looks like:

        Thread 443 (Thread 0x7e5b90 (LWP 26310)):
        #0  0x0115ebd3 in CompactibleFreeListSpace::block_size(HeapWord const*) const () from /usr/java/jdk1.6.0_20/jre/lib/i386/server/libjvm.so
        #1  0x01160ff9 in CompactibleFreeListSpace::prepare_for_compaction(CompactPoint*) () from /usr/java/jdk1.6.0_20/jre/lib/i386/server/libjvm.so
        #2  0x0123456c in Generation::prepare_for_compaction(CompactPoint*) () from /usr/java/jdk1.6.0_20/jre/lib/i386/server/libjvm.so
        #3  0x01229b2c in GenCollectedHeap::prepare_for_compaction() () from /usr/java/jdk1.6.0_20/jre/lib/i386/server/libjvm.so
        #4  0x0122a7fc in GenMarkSweep::invoke_at_safepoint(int, ReferenceProcessor*, bool) () from /usr/java/jdk1.6.0_20/jre/lib/i386/server/libjvm.so
        #5  0x01186024 in CMSCollector::do_compaction_work(bool) () from /usr/java/jdk1.6.0_20/jre/lib/i386/server/libjvm.so
        #6  0x011859ee in CMSCollector::acquire_control_and_collect(bool, bool) () from /usr/java/jdk1.6.0_20/jre/lib/i386/server/libjvm.so
        #7  0x01185705 in ConcurrentMarkSweepGeneration::collect(bool, bool, unsigned int, bool) () from /usr/java/jdk1.6.0_20/jre/lib/i386/server/libjvm.so
        #8  0x01227f53 in GenCollectedHeap::do_collection(bool, bool, unsigned int, bool, int) () from /usr/java/jdk1.6.0_20/jre/lib/i386/server/libjvm.so
        #9  0x0115c7b5 in GenCollectorPolicy::satisfy_failed_allocation(unsigned int, bool) () from /usr/java/jdk1.6.0_20/jre/lib/i386/server/libjvm.so
        #10 0x0122859c in GenCollectedHeap::satisfy_failed_allocation(unsigned int, bool) () from /usr/java/jdk1.6.0_20/jre/lib/i386/server/libjvm.so
        #11 0x0158a8ce in VM_GenCollectForAllocation::doit() () from /usr/java/jdk1.6.0_20/jre/lib/i386/server/libjvm.so
        #12 0x015987e6 in VM_Operation::evaluate() () from /usr/java/jdk1.6.0_20/jre/lib/i386/server/libjvm.so
        #13 0x01597c93 in VMThread::evaluate_operation(VM_Operation*) () from /usr/java/jdk1.6.0_20/jre/lib/i386/server/libjvm.so
        #14 0x01597f0f in VMThread::loop() () from /usr/java/jdk1.6.0_20/jre/lib/i386/server/libjvm.so
        #15 0x015979f0 in VMThread::run() () from /usr/java/jdk1.6.0_20/jre/lib/i386/server/libjvm.so
        #16 0x0145c24e in java_start(Thread*) () from /usr/java/jdk1.6.0_20/jre/lib/i386/server/libjvm.so
        #17 0x00ccd46b in start_thread () from /lib/libpthread.so.0
        #18 0x00bc2dbe in clone () from /lib/libc.so.6

    It seems that a JVM thread is spinning while doing some CMS-related work.
    We have checked the memory usage on the box; there seems to be enough memory available, and the system is not swapping. Has anyone come across such a situation? Does it look like a JVM bug?

    UPDATE: I've obtained some more information about this problem (it happened again, on a server that had been running for more than 7 days). When the JVM entered the "hung" state it stayed like that for 2 hours, until the server was manually restarted. We obtained a core dump of the process and the GC log. We tried to get a heap dump as well, but jmap failed. We tried jmap -F, but then only a 4 MB file was written before the program aborted with an exception (something about a memory location not being accessible). So far I think the most interesting information comes from the GC log. It seems that GC logging stopped as well (possibly at the time the VM thread went into the long loop):

        657501.199: [Full GC (System) 657501.199: [CMS: 400352K->313412K(524288K), 2.4024120 secs] 660634K->313412K(878208K), [CMS Perm : 29455K->29320K(68568K)], 2.4026470 secs] [Times: user=2.39 sys=0.01, real=2.40 secs]
        657513.941: [GC 657513.941: [ParNew: 314624K->13999K(353920K), 0.0228180 secs] 628036K->327412K(878208K), 0.0230510 secs] [Times: user=0.08 sys=0.00, real=0.02 secs]
        657523.772: [GC 657523.772: [ParNew: 328623K->17110K(353920K), 0.0244910 secs] 642036K->330523K(878208K), 0.0247140 secs] [Times: user=0.08 sys=0.00, real=0.02 secs]
        657535.473: [GC 657535.473: [ParNew: 331734K->20282K(353920K), 0.0259480 secs] 645147K->333695K(878208K), 0.0261670 secs] [Times: user=0.11 sys=0.00, real=0.02 secs]
        ....
        688346.765: [GC [1 CMS-initial-mark: 485248K(524288K)] 515694K(878208K), 0.0343730 secs] [Times: user=0.03 sys=0.00, real=0.04 secs]
        688346.800: [CMS-concurrent-mark-start]
        688347.964: [CMS-concurrent-mark: 1.083/1.164 secs] [Times: user=2.52 sys=0.09, real=1.16 secs]
        688347.964: [CMS-concurrent-preclean-start]
        688347.969: [CMS-concurrent-preclean: 0.004/0.005 secs] [Times: user=0.00 sys=0.01, real=0.01 secs]
        688347.969: [CMS-concurrent-abortable-preclean-start]
        CMS: abort preclean due to time 688352.986: [CMS-concurrent-abortable-preclean: 2.351/5.017 secs] [Times: user=3.83 sys=0.38, real=5.01 secs]
        688352.987: [GC[YG occupancy: 297806 K (353920 K)]688352.987: [Rescan (parallel) , 0.1815250 secs]688353.169: [weak refs processing, 0.0312660 secs] [1 CMS-remark: 485248K(524288K)] 783055K(878208K), 0.2131580 secs] [Times: user=1.13 sys=0.00, real=0.22 secs]
        688353.201: [CMS-concurrent-sweep-start]
        688353.903: [CMS-concurrent-sweep: 0.660/0.702 secs] [Times: user=0.91 sys=0.07, real=0.70 secs]
        688353.903: [CMS-concurrent-reset-start]
        688353.912: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
        688354.243: [GC 688354.243: [ParNew: 344928K->30151K(353920K), 0.0305020 secs] 681955K->368044K(878208K), 0.0308880 secs] [Times: user=0.15 sys=0.00, real=0.03 secs]
        ....
        688943.029: [GC 688943.029: [ParNew: 336531K->17143K(353920K), 0.0237360 secs] 813250K->494327K(878208K), 0.0241260 secs] [Times: user=0.10 sys=0.00, real=0.03 secs]
        688950.620: [GC 688950.620: [ParNew: 331767K->22442K(353920K), 0.0344110 secs] 808951K->499996K(878208K), 0.0347690 secs] [Times: user=0.11 sys=0.00, real=0.04 secs]
        688956.596: [GC 688956.596: [ParNew: 337064K->37809K(353920K), 0.0488170 secs] 814618K->515896K(878208K), 0.0491550 secs] [Times: user=0.18 sys=0.04, real=0.05 secs]
        688961.470: [GC 688961.471: [ParNew (promotion failed): 352433K->332183K(353920K), 0.1862520 secs]688961.657: [CMS

    I suspect this problem has something to do with the last line in the log (I've added the "...." markers to skip some records that were not interesting). The fact that the server stayed in the hung state for 2 hours (probably trying to GC and compact the old generation) seems quite strange to me. Also, the GC log stops suddenly with that message and nothing else gets printed any more, probably because the VM thread gets into some sort of infinite loop (or something that takes 2+ hours).
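    For reference, a small shell sketch of the thread-matching step described above, using standard Linux/JDK tooling (angle-bracket placeholders are to be filled in):

        # 1. Watch per-thread CPU for the JVM process; note the hot LWP id.
        top -H -p <jvm-pid>
        # 2. Convert the decimal LWP id to hex; thread dumps report it as nid=0x...
        printf 'nid=0x%x\n' <lwp-id>
        # 3. When jstack cooperates, search its output (or the gdb dump) for that id.
        jstack <jvm-pid> | grep 'nid=0x<hex-id>'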

  • Avoiding Agnostic Jagged Array Flattening in Powershell

    - by matejhowell
    Hello, I'm running into an interesting problem in PowerShell and haven't been able to find a solution to it. When I google (and find things like this post), nothing quite as involved as what I'm trying to do comes up, so I thought I'd post the question here. The problem has to do with multidimensional arrays whose outer array has length one. It appears PowerShell is very adamant about flattening such arrays, so @( @('A') ) becomes @( 'A' ). Here is the first snippet (the prompt is >, btw):

        > $a = @( @( 'Test' ) )
        > $a.gettype().isarray
        True
        > $a[0].gettype().isarray
        False

    So, I'd like $a[0].gettype().isarray to be true, so that I can index the value as $a[0][0] (the real-world scenario is processing dynamic arrays inside of a loop, where I'd like to get the values as $a[$i][$j]; if the inner item is not recognized as an array but as a string, in my case, you start indexing into the characters of the string, as in $a[0][0] -eq 'T'). I have a couple of long code examples, so I have posted them at the end. And, for reference, this is on Windows 7 Ultimate with PSv2 and PSCX installed.

    Consider code example 1: I build a simple array manually using the += operator. Intermediate array $w is flattened and consequently is not added to the final array correctly. I have found solutions online for similar problems, which basically involve putting a comma before the inner array to force the outer array not to flatten. That does work, but again, I'm looking for a solution that can build arrays inside a loop (a jagged array of arrays, processing a CSS file), so if I add the leading comma to the single-element array (implemented as intermediate array $y), I'd like to do the same for other arrays (like $z), but that adversely affects how $z is added to the final array.

    Now consider code example 2: this is closer to the actual problem I am having. When a multidimensional array with one element is returned from a function, it is flattened. It is correct before it leaves the function. And again, these are examples; I'm really trying to process a file without having to know whether the function is going to come back with @( @( 'color', 'black') ) or with @( @( 'color', 'black'), @( 'background-color', 'white') ).

    Has anybody encountered this, and has anybody resolved it? I know I can instantiate framework objects, and I'm assuming everything will be fine if I create an object[] or a List<T> or something else similar, but I've been dealing with this for a little bit and it sure seems like there has to be a right way to do this (without having to instantiate true framework objects).

    Code example 1:

        function Display($x, [int]$indent, [string]$title)
        {
            if($title -ne '') { write-host "$title`: " -foregroundcolor cyan -nonewline }
            if(!$x.GetType().IsArray) {
                write-host "'$x'" -foregroundcolor cyan
            }
            else {
                write-host ''
                $s = new-object string(' ', $indent)
                for($i = 0; $i -lt $x.length; $i++) {
                    write-host "$s[$i]: " -nonewline -foregroundcolor cyan
                    Display $x[$i] $($indent+1)
                }
            }
            if($title -ne '') { write-host '' }
        }

        ### Start Program
        $final = @( @( 'a', 'b' ), @('c'))
        Display $final 0 'Initial Value'

        ### How do we do this part ??? ###########
        ## $w = @( @('d', 'e') )
        ## $x = @( @('f', 'g'), @('h') )
        ## # But now $w is flat, $w.length = 2
        ##
        ## # Even if we put a leading comma (,)
        ## # in front of the array, $y will work
        ## # but $w will not. This can be a
        ## # problem inside a loop where you don't
        ## # know the length of the array, and you
        ## # need to put a comma in front of
        ## # single- and multidimensional arrays.
        ## $y = @( ,@('D', 'E') )
        ## $z = @( ,@('F', 'G'), @('H') )
        ##########################################

        $final += $w
        $final += $x
        $final += $y
        $final += $z
        Display $final 0 'Final Value'

        ### Desired final value:
        ###   @( @('a', 'b'), @('c'), @('d', 'e'), @('f', 'g'), @('h'),
        ###      @('D', 'E'), @('F', 'G'), @('H') )
        ### As in the below:
        #
        # Initial Value:
        #  [0]:
        #   [0]: 'a'
        #   [1]: 'b'
        #  [1]:
        #   [0]: 'c'
        #
        # Final Value:
        #  [0]:
        #   [0]: 'a'
        #   [1]: 'b'
        #  [1]:
        #   [0]: 'c'
        #  [2]:
        #   [0]: 'd'
        #   [1]: 'e'
        #  [3]:
        #   [0]: 'f'
        #   [1]: 'g'
        #  [4]:
        #   [0]: 'h'
        #  [5]:
        #   [0]: 'D'
        #   [1]: 'E'
        #  [6]:
        #   [0]: 'F'
        #   [1]: 'G'
        #  [7]:
        #   [0]: 'H'

    Code example 2:

        function Display($x, [int]$indent, [string]$title)
        {
            if($title -ne '') { write-host "$title`: " -foregroundcolor cyan -nonewline }
            if(!$x.GetType().IsArray) {
                write-host "'$x'" -foregroundcolor cyan
            }
            else {
                write-host ''
                $s = new-object string(' ', $indent)
                for($i = 0; $i -lt $x.length; $i++) {
                    write-host "$s[$i]: " -nonewline -foregroundcolor cyan
                    Display $x[$i] $($indent+1)
                }
            }
            if($title -ne '') { write-host '' }
        }

        function funA()
        {
            $ret = @()
            $temp = @(0)
            $temp[0] = @('p', 'q')
            $ret += $temp
            Display $ret 0 'Inside Function A'
            return $ret
        }

        function funB()
        {
            $ret = @( ,@('r', 's') )
            Display $ret 0 'Inside Function B'
            return $ret
        }

        ### Start Program
        $z = funA
        Display $z 0 'Return from Function A'
        $z = funB
        Display $z 0 'Return from Function B'

        ### Desired final value: @( @('p', 'q') ), and the same for r,s
        ### As in the below:
        #
        # Inside Function A:
        #  [0]:
        #   [0]: 'p'
        #   [1]: 'q'
        #
        # Return from Function A:
        #  [0]:
        #   [0]: 'p'
        #   [1]: 'q'

    Thanks, Matt
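    A hedged sketch of the standard workaround: the unary comma also works on the way out of a function, so wrapping the return value keeps PowerShell's pipeline from unrolling a one-element outer array. This is the usual idiom rather than a guaranteed fix for every case above:

        function funC()
        {
            $ret = @( ,@('r', 's') )
            # The leading comma wraps $ret in a one-element array; the pipeline
            # unrolls that wrapper and delivers $ret itself intact.
            return ,$ret
        }

        $z = funC
        $z[0].GetType().IsArray   # True; $z[0][0] is 'r'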

  • ORA-06502: PL/SQL: numeric or value error: character string buffer too small with Oracle aggregate functions

    - by Tunde
    Good day, gurus. I have a script that populates tables on a regular basis, and it crashed with the above error. The strange thing is that it had been running for close to 3 months on the production system with no problems, and it suddenly crashed last week. There have not been any changes to the tables as far as I know. Has anyone encountered something like this before? I believe it has something to do with the aggregate functions I'm implementing, but it worked initially. Please kindly find attached the part of the script, developed into a procedure, that I reckon gives the error.

        CREATE OR REPLACE PROCEDURE V1 IS
        --DECLARE
          v_a VARCHAR2(4000);
          v_b VARCHAR2(4000);
          v_c VARCHAR2(4000);
          v_d VARCHAR2(4000);
          v_e VARCHAR2(4000);
          v_f VARCHAR2(4000);
          v_g VARCHAR2(4000);
          v_h VARCHAR2(4000);
          v_i VARCHAR2(4000);
          v_j VARCHAR2(4000);
          v_k VARCHAR2(4000);
          v_l VARCHAR2(4000);
          v_m VARCHAR2(4000);
          v_n NUMBER(10);
          v_o VARCHAR2(4000);
        -- Procedure that populates DEMO table
        BEGIN
          -- Delete all from the DEMO table
          DELETE FROM DEMO;
          -- Populate fields in DEMO from DEMOV1
          INSERT INTO DEMO(ID, D_ID, CTR_ID, C_ID, DT_NAM, TP, BYR, ENY, ONG, SUMM, DTW, REV, LD, MD, STAT, CRD)
          SELECT ID, D_ID, CTR_ID, C_ID, DT_NAM, TP,
                 TO_NUMBER(TO_CHAR(BYR,'YYYY')),
                 TO_NUMBER(TO_CHAR(NVL(ENY,SYSDATE),'YYYY')),
                 CASE WHEN ENY IS NULL THEN 'Y' ELSE 'N' END,
                 SUMMARY, DTW, REV, LD, MD, '1', SYSDATE
          FROM DEMOV1;
          -- LOOP THROUGH DEMO TABLE
          FOR j IN (SELECT ID, CTR_ID, C_ID FROM DEMO) LOOP
            SELECT semic_concat(TXTDESC) INTO v_a FROM GEOT WHERE ID = j.ID;
            SELECT COUNT(*) INTO v_n FROM MERP M, PROJ P WHERE M.MID = P.COD AND ID = j.ID AND PROAC IS NULL;
            IF (v_n > 0) THEN
              SELECT semic_concat(PRO) INTO v_b FROM MERP M, PROJ P WHERE M.MID = P.COD AND ID = j.ID;
            ELSE
              SELECT semic_concat(PRO || '(' || PROAC || ')') INTO v_b FROM MERP M, PROJ P WHERE M.MID = P.COD AND ID = j.ID;
            END IF;
            SELECT semic_concat(VOCNAME('P02',COD)) INTO v_c FROM PAR WHERE ID = j.ID;
            SELECT semic_concat(VOCNAME('L05',COD)) INTO v_d FROM INST WHERE ID = j.ID;
            SELECT semic_concat(NVL(AUTHOR,'Anon') ||' ('||to_char(PUB,'YYYY')||') '||TITLE||', '||EDT) INTO v_e FROM REFE WHERE ID = j.ID;
            SELECT semic_concat(NAM) INTO v_f FROM EDM E, EDO EO WHERE E.EDMID = EO.EDOID AND ID = j.ID;
            SELECT semic_concat(VOCNAME('L08', COD)) INTO v_g FROM AVA WHERE ID = j.ID;
            SELECT or_concat(NAM) INTO v_o FROM CON WHERE ID = j.ID AND NAM = 'Unknown';
            IF (v_o = 'Unknown') THEN
              SELECT or_concat(JOBTITLE || ' (' || EMAIL || ')') INTO v_h FROM CON WHERE ID = j.ID;
            ELSE
              SELECT or_concat(NAM || ' (' || EMAIL || ')') INTO v_h FROM CON WHERE ID = j.ID;
            END IF;
            SELECT commaencap_concat(COD) INTO v_i FROM PAR WHERE ID = j.ID;
            IF (v_i = ',') THEN
              v_i := null;
            ELSE
              SELECT commaencap_concat(COD) INTO v_i FROM PAR WHERE ID = j.ID;
            END IF;
            SELECT commaencap_concat(COD) INTO v_j FROM INST WHERE ID = j.ID;
            IF (v_j = ',') THEN
              v_j := null;
            ELSE
              SELECT commaencap_concat(COD) INTO v_j FROM INST WHERE ID = j.ID;
            END IF;
            SELECT commaencap_concat(COD) INTO v_k FROM SAR WHERE ID = j.ID;
            IF (v_k = ',') THEN
              v_k := null;
            ELSE
              SELECT commaencap_concat(COD) INTO v_k FROM SAR WHERE ID = j.ID;
            END IF;
            SELECT commaencap_concat(CONID) INTO v_l FROM CON WHERE ID = j.ID;
            IF (v_l = ',') THEN
              v_l := null;
            ELSE
              SELECT commaencap_concat(CONID) INTO v_l FROM CON WHERE ID = j.ID;
            END IF;
            SELECT commaencap_concat(PROID) INTO v_m FROM PRO WHERE ID = j.ID;
            IF (v_m = ',') THEN
              v_m := null;
            ELSE
              SELECT commaencap_concat(PROID) INTO v_m FROM PRO WHERE ID = j.ID;
            END IF;
            -- UPDATE DEMO TABLE
            UPDATE DEMO
            SET GEOC = v_a, PRO = v_b, PAR = v_c, INS = v_d, REFER = v_e, ORGR = v_f,
                AVAY = v_g, CON = v_h, DTH = v_i, INST = v_j, SA = v_k, CC = v_l, EDPR = v_m,
                CTR = (SELECT NAM FROM EDM WHERE EDMID = j.CTR_ID),
                COLL = (SELECT NAM FROM EDM WHERE EDMID = j.C_ID)
            WHERE ID = j.ID;
          END LOOP;
        END V1;
        /

    The aggregate functions are commaencap_concat (encapsulates with commas), or_concat (concatenates with 'or') and semic_concat (concatenates with semicolons). The remaining tables used are all linked to the main table DEMO. I have checked the column sizes and there seems to be no problem. I tried executing the SELECT statements alone and they give the same error without populating the tables. Any clues? Many thanks for your anticipated support.
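    One hedged line of investigation: ORA-06502 with custom string aggregates very often means a concatenated result finally outgrew its VARCHAR2 buffer, for example when newly arrived rows pushed some group past 4000 bytes, which would explain three months of clean runs followed by a sudden crash. A rough sketch for finding the offending IDs, plus the built-in alternative on Oracle 12.2 and later (the 4000-byte limit is an assumption about the custom aggregates' internal buffer):

        -- Which IDs produce an oversized semicolon-joined string?
        -- (+1 per row approximates the separator overhead.)
        SELECT ID, SUM(LENGTH(TXTDESC) + 1) AS approx_len
        FROM   GEOT
        GROUP  BY ID
        HAVING SUM(LENGTH(TXTDESC) + 1) > 4000;

        -- Oracle 12.2+: LISTAGG can truncate instead of erroring out.
        SELECT ID,
               LISTAGG(TXTDESC, ';' ON OVERFLOW TRUNCATE '...')
                 WITHIN GROUP (ORDER BY TXTDESC) AS joined
        FROM   GEOT
        GROUP  BY ID;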

  • Javascript: Make a static code, dynamic - List of inputs

    - by BoDiE2003
    I have this code that checks some ids and enables others. The JavaScript is pretty clear about what it does, but since it corresponds to some specific id ranges, I can't just run one loop until it finishes. I'm looking for a way to make this dynamic and save 40 lines of code (or more), since this is not the best way:

        function loopGroup1() {
            var a = 0;
            do {
                $$('.selectedAuthorities-3_' + a).each(function(chk1) {
                    // watch for clicks
                    chk1.observe('click', function(evt) {
                        dynamicCheckbox1();
                    });
                    dynamicCheckbox1();
                });
                a++;
            } while (a < 4);
        }

        function dynamicCheckbox1() {
            // count how many of group_first are checked,
            // doEnable true if any are checked
            var doEnable = ($$('.selectedAuthorities-3_0:checked').length > 0) ? true : false;
            var doEnable1 = ($$('.selectedAuthorities-3_1:checked').length > 0) ? true : false;
            var doEnable2 = ($$('.selectedAuthorities-3_2:checked').length > 0) ? true : false;
            // for each in group_second, enable the checkbox, and
            // remove the cssDisabled class from the parent label
            var i = 0;
            do {
                $$('.selectedAuthorities-4_' + i).each(function(item) {
                    if (doEnable || doEnable1 || doEnable2) {
                        item.enable().up('li').removeClassName('cssDisabled');
                    } else {
                        item.disable().up('li').addClassName('cssDisabled');
                    }
                });
                i++;
            } while (i < 4);
        };

        /* * * Loop Group 2 * * */

        function loopGroup2() {
            var a = 0;
            do {
                $$('.selectedAuthorities-5_' + a).each(function(chk1) {
                    // watch for clicks
                    chk1.observe('click', function(evt) {
                        dynamicCheckbox2();
                    });
                    dynamicCheckbox2();
                });
                a++;
            } while (a < 4);
        }

        function dynamicCheckbox2() {
            // count how many of group_first are checked,
            // doEnable true if any are checked
            var doEnable3 = ($$('.selectedAuthorities-5_0:checked').length > 0) ? true : false;
            // for each in group_second, enable the checkbox, and
            // remove the cssDisabled class from the parent label
            var i = 0;
            do {
                $$('.selectedAuthorities-6_' + i).each(function(item) {
                    if (doEnable3) {
                        item.enable().up('li').removeClassName('cssDisabled');
                    } else {
                        item.disable().up('li').addClassName('cssDisabled');
                    }
                });
                i++;
            } while (i < 4);
        };

        /* * * Loop Group 3 * * */

        function loopGroup3() {
            var a = 0;
            do {
                $$('.selectedAuthorities-6_' + a).each(function(chk1) {
                    // watch for clicks
                    chk1.observe('click', function(evt) {
                        dynamicCheckbox3();
                    });
                    dynamicCheckbox3();
                });
                a++;
            } while (a < 4);
        }

        function dynamicCheckbox3() {
            // count how many of group_first are checked,
            // doEnable true if any are checked
            var doEnable4 = ($$('.selectedAuthorities-6_0:checked').length > 0) ? true : false;
            var doEnable5 = ($$('.selectedAuthorities-6_1:checked').length > 0) ? true : false;
            // for each in group_second, enable the checkbox, and
            // remove the cssDisabled class from the parent label
            var i = 0;
            do {
                $$('.selectedAuthorities-7_' + i).each(function(item) {
                    if (doEnable4 || doEnable5) {
                        item.enable().up('li').removeClassName('cssDisabled');
                    } else {
                        item.disable().up('li').addClassName('cssDisabled');
                    }
                });
                i++;
            } while (i < 4);
        };

        /* * * Loop Group 4 * * */

        function loopGroup4() {
            var a = 0;
            do {
                $$('.selectedAuthorities-9_' + a).each(function(chk1) {
                    // watch for clicks
                    chk1.observe('click', function(evt) {
                        dynamicCheckbox4();
                    });
                    dynamicCheckbox4();
                });
                a++;
            } while (a < 4);
        }

        function dynamicCheckbox4() {
            // count how many of group_first are checked,
            // doEnable true if any are checked
            var doEnable6 = ($$('.selectedAuthorities-9_0:checked').length > 0) ? true : false;
            var doEnable7 = ($$('.selectedAuthorities-9_1:checked').length > 0) ? true : false;
            // for each in group_second, enable the checkbox, and
            // remove the cssDisabled class from the parent label
            var i = 0;
            do {
                $$('.selectedAuthorities-10_' + i).each(function(item) {
                    if (doEnable6 || doEnable7) {
                        item.enable().up('li').removeClassName('cssDisabled');
                    } else {
                        item.disable().up('li').addClassName('cssDisabled');
                    }
                });
                i++;
            } while (i < 4);
        };
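    Since the four loopGroupN/dynamicCheckboxN pairs differ only in their class-name prefixes and in how many source classes they test, a single parameterized pair is one way to collapse them. This is a sketch assuming Prototype.js, which the $$()/observe/enable/up calls above imply; the group numbers mirror the ones in the original:

        // One generic wiring function: any checked box in srcPrefix_0..srcCount-1
        // enables every box in dstPrefix_0..3.
        function wireGroup(srcPrefix, srcCount, dstPrefix) {
            function refresh() {
                var doEnable = false;
                for (var s = 0; s < srcCount; s++) {
                    if ($$('.' + srcPrefix + '_' + s + ':checked').length > 0) doEnable = true;
                }
                for (var i = 0; i < 4; i++) {
                    $$('.' + dstPrefix + '_' + i).each(function (item) {
                        if (doEnable) item.enable().up('li').removeClassName('cssDisabled');
                        else item.disable().up('li').addClassName('cssDisabled');
                    });
                }
            }
            for (var a = 0; a < 4; a++) {
                $$('.' + srcPrefix + '_' + a).each(function (chk) {
                    chk.observe('click', refresh);
                });
            }
            refresh();
        }

        // The four original group pairs become four calls:
        wireGroup('selectedAuthorities-3', 3, 'selectedAuthorities-4');
        wireGroup('selectedAuthorities-5', 1, 'selectedAuthorities-6');
        wireGroup('selectedAuthorities-6', 2, 'selectedAuthorities-7');
        wireGroup('selectedAuthorities-9', 2, 'selectedAuthorities-10');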

  • timer_getoverrun() doesn't behave as expected when using sleep()

    - by dlp
    Here is a program that uses a POSIX per-process timer alongside the sleep subroutine. The signal used by the timer has been set to SIGUSR1 rather than SIGALRM, since SIGALRM may be used internally by sleep, but it still doesn't seem to work. I have run the program using the command line timer-overruns -d 1 -n 10000000 (a 1 cs interval), so, in theory, we should expect 100 overruns between calls to sigwaitinfo. However, timer_getoverrun returns 0. I have also tried a version using a time-consuming for loop to introduce the delay; in this case, overruns are recorded. Does anyone know why this happens? I am running a 3.4 Linux kernel.

    Program source:

        /*
         * timer-overruns.c
         */
        #include <unistd.h>
        #include <stdlib.h>
        #include <stdio.h>
        #include <signal.h>
        #include <time.h>

        // Signal to be used for timer expirations
        #define TIMER_SIGNAL SIGUSR1

        int main(int argc, char **argv)
        {
            int opt;
            int d = 0;
            int r = 0; // Repeat indefinitely
            struct itimerspec its;
            its.it_interval.tv_sec = 0;
            its.it_interval.tv_nsec = 0;

            // Parse arguments
            while ((opt = getopt(argc, argv, "d:r:s:n:")) != -1) {
                switch (opt) {
                case 'd': // Delay before calling sigwaitinfo()
                    d = atoi(optarg);
                    break;
                case 'r': // Number of times to call sigwaitinfo()
                    r = atoi(optarg);
                    break;
                case 's': // Timer interval (seconds)
                    its.it_interval.tv_sec = its.it_value.tv_sec = atoi(optarg);
                    break;
                case 'n': // Timer interval (nanoseconds)
                    its.it_interval.tv_nsec = its.it_value.tv_nsec = atoi(optarg);
                    break;
                default: /* '?' */
                    fprintf(stderr, "Usage: %s [-d signal_accept_delay] [-r repetitions] [-s interval_seconds] [-n interval_nanoseconds]\n", argv[0]);
                    exit(EXIT_FAILURE);
                }
            }

            // Check sanity of command line arguments
            short e = 0;
            if (d < 0) { fprintf(stderr, "Delay (-d) cannot be negative!\n"); e++; }
            if (r < 0) { fprintf(stderr, "Number of repetitions (-r) cannot be negative!\n"); e++; }
            if (its.it_interval.tv_sec < 0) { fprintf(stderr, "Interval seconds value (-s) cannot be negative!\n"); e++; }
            if (its.it_interval.tv_nsec < 0) { fprintf(stderr, "Interval nanoseconds value (-n) cannot be negative!\n"); e++; }
            if (its.it_interval.tv_nsec > 999999999) { fprintf(stderr, "Interval nanoseconds value (-n) must be < 1 second.\n"); e++; }
            if (e > 0) exit(EXIT_FAILURE);

            // Set default values if not specified
            if (its.it_interval.tv_sec == 0 && its.it_interval.tv_nsec == 0) {
                its.it_interval.tv_sec = its.it_value.tv_sec = 1;
                its.it_value.tv_nsec = 0;
            }
            printf("Running with timer delay %d.%09d seconds\n", (int) its.it_interval.tv_sec, (int) its.it_interval.tv_nsec);

            // Will be waiting for signals synchronously, so block the one in use.
            sigset_t sigset;
            sigemptyset(&sigset);
            sigaddset(&sigset, TIMER_SIGNAL);
            sigprocmask(SIG_BLOCK, &sigset, NULL);

            // Create and arm the timer
            struct sigevent sev;
            timer_t timer;
            sev.sigev_notify = SIGEV_SIGNAL;
            sev.sigev_signo = TIMER_SIGNAL;
            sev.sigev_value.sival_ptr = timer;
            timer_create(CLOCK_REALTIME, &sev, &timer);
            timer_settime(timer, TIMER_ABSTIME, &its, NULL);

            // Signal handling loop
            int overruns;
            siginfo_t si;
            // Make the loop infinite if r = 0
            if (r == 0) r = -1;
            while (r != 0) {
                // Sleeping should cause overruns
                if (d > 0) sleep(d);
                sigwaitinfo(&sigset, &si);
                // Check that the signal is from the timer
                if (si.si_code != SI_TIMER) continue;
                overruns = timer_getoverrun(timer);
                if (overruns > 0) {
                    printf("Timer overrun occurred for %d expirations.\n", overruns);
                }
                // Decrement r if not repeating indefinitely
                if (r > 0) r--;
            }
            return EXIT_SUCCESS;
        }
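    For comparison, a sketch of the busy-wait variant mentioned above, which reportedly does produce overruns; it would replace the sleep(d) call inside the signal handling loop (using CLOCK_MONOTONIC is a choice made here, not taken from the original):

        /* Busy-wait for roughly d seconds without blocking in the kernel. */
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        do {
            clock_gettime(CLOCK_MONOTONIC, &t1);
        } while (t1.tv_sec - t0.tv_sec < d);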

  • Having trouble comparing a range of dates entered in a form with records in a mySQL database

    - by Andrew Fox
    I have a table called schedule and a column called Date, whose type is date. In that column I have a range of dates, currently from 2012-11-01 to 2012-11-30. I have a small form where the user can enter a range of dates (input names "from" and "to"), and I want to be able to compare that range with the dates currently in the database. This is what I have:

        ////////////////////////////////////////////////////////
        ////// First set the date range that we want to use /////
        ////////////////////////////////////////////////////////
        if(isset($_POST['from']) && ($_POST['from'] != NULL)) {
            $startDate = $_POST['from'];
        } else {
            // Default date is Today
            $startDate = date("Y-m-d");
        }
        if(isset($_POST['to']) && ($_POST['to'] != NULL)) {
            $endDate = $_POST['to'];
        } else {
            // Default day is one month from today
            $endDate = date("Y-m-d", strtotime("+1 month"));
        }

        //////////////////////////////////////////////////////////////////////////////////////
        ////// Next calculate the total amount of days selected above to use as a limiter /////
        //////////////////////////////////////////////////////////////////////////////////////
        $dayStart = strtotime($startDate);
        $dayEnd = strtotime($endDate);
        $total_days = abs($dayEnd - $dayStart) / 86400 + 1;

        echo "Start Date: " . $startDate . "<br>End Date: " . $endDate . "<br>";
        echo "Day Start: " . $dayStart . "<br>Day End: " . $dayEnd . "<br>";
        echo "Total Days: " . $total_days . "<br>";

        ////////////////////////////////////////////////////////////////////////////////////
        ////// Then we're going to see if the dates selected are in the schedule table //////
        ////////////////////////////////////////////////////////////////////////////////////
        // Select all of the dates currently in the schedule table between the range selected.
        $sql = ("SELECT Date FROM schedule WHERE Date BETWEEN '$startDate' AND '$endDate' LIMIT $total_days");

        // Run a check on the query to make sure it worked. If it failed then print the error.
        if(!$result_date_query = $mysqli->query($sql)) {
            die('There was an error getting the dates from the schedule table [' . $mysqli->error . ']');
        }

        // Set the dates to an array for future use.
        // $current_dates = $result_date_query->fetch_assoc();

        // Loop through the results while a result is being returned.
        while($row = $result_date_query->fetch_assoc()) {
            echo "Row: " . $row['Date'] . "<br>";
            echo "Start day: " . date('Y-m-d', $dayStart) . "<br>";
            // Set this loop to add 1 day to the Start Date until it reaches the End Date
            for($i = $dayStart; $i <= $dayEnd; $i = strtotime('+1 day', $i)) {
                $date = date('Y-m-d', $i);
                echo "Loop Start day: " . date('Y-m-d', $dayStart) . "<br>";
                // Run a check to see if any of the dates selected are in the schedule table.
                if($row['Date'] != $date) {
                    echo "Current Date: " . $row['Date'] . "<br>";
                    echo "Date: " . $date . "<br>";
                    echo "It appears as though you've selected some dates that are not in the schedule database.<br>Please correct the issue and try again.";
                    return;
                }
            }
        }
        // Free the result so something else can use it.
        $result_date_query->free();

    As you can see, I've added some echo statements so I can see what is being produced. From what I can see, it looks like my $row['Date'] is not incrementing and stays at the same date. I originally had it set to a variable (currently commented out), but I thought that could be causing problems. I created the table with dates ranging from 2012-11-01 to 2012-11-15 for testing and entered all of this PHP code on phpfiddle.org, but I can't get the provided username to connect. Here is the link: PHP Fiddle. I'll be reading through the documentation to try to figure out the user connection problem in the meantime. I would really appreciate any direction or advice you can give me.
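    A hedged observation, with a sketch: the inner for loop restarts at $dayStart for every fetched row and bails out on the first row/date pair that differs, so it will flag a mismatch for almost any data. Comparing the two sets as arrays avoids the nested loops entirely (assuming the same $startDate/$endDate strings and mysqli result as above):

        // Collect the schedule dates that fall inside the range.
        $dbDates = array();
        while ($row = $result_date_query->fetch_assoc()) {
            $dbDates[] = $row['Date'];
        }

        // Build every date in the requested range.
        $wanted = array();
        for ($i = strtotime($startDate); $i <= strtotime($endDate); $i = strtotime('+1 day', $i)) {
            $wanted[] = date('Y-m-d', $i);
        }

        // Anything requested but absent from the table?
        $missing = array_diff($wanted, $dbDates);
        if (count($missing) > 0) {
            echo "These selected dates are not in the schedule table: " . implode(', ', $missing);
        }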

  • Objects in Java ArrayList don't get updated.

    - by Sbm007
    This is going to be a very long post; hopefully you can understand what I'm talking about, and I appreciate any help. Thanks.

    Basically, I've created a personal, non-commercial project (which I don't plan to release) that can read ZIP and RAR files. It can only read the contents of the archive: the folders inside, the files inside the folders, and their properties (such as last modified date, last modified time, CRC checksum, uncompressed size, compressed size and file name). It can't extract files either, so it's really a ZIP/RAR viewer, if you may. Anyway, that's slightly irrelevant to my problem, but I thought I'd give you some background info.

    Now for my problem: I can successfully list all the folders and files inside a ZIP archive, so now I want to take that raw input and link it together in some useful way. I made two classes: ArchiveFile (represents a file inside a ZIP) and ArchiveFolder (represents a folder inside a ZIP). They both have some useful methods, such as getLastModifiedDate, getName, getPath and so on. The difference is that ArchiveFolder can hold an ArrayList of ArchiveFile's and additional ArchiveFolder's (think of this as files and folders inside a folder).

    Now I want to populate my raw input into one root ArchiveFolder, which will have all the files in the root dir of the ZIP in its ArchiveFile ArrayList and any folders in the root dir of the ZIP in its ArchiveFolder ArrayList (and this process can continue on like a chain reaction: more files/folders in each of those ArchiveFolder's, and so on). So I came up with the following code:

        while (archive.hasMore()) {
            String path = "";
            ArchiveFolder current = root;
            String[] contents = archive.getName().split("/");
            for (int x = 0; x < contents.length; ++x) {
                if (x == (contents.length - 1) && !archive.getName().endsWith("/")) { // If on last item and item is a file
                    path += contents[x]; // Update final path
                    ArchiveFile file = new ArchiveFile(path, contents[x], archive.getUncompressedSize(),
                            archive.getCompressedSize(), archive.getModifiedTime(), archive.getModifiedDate(), archive.getCRC());
                    current.addFile(file); // Create and add the file to the current ArchiveFolder
                } else if (x == (contents.length - 1)) { // Else if we are on last item and it is a folder
                    path += contents[x] + "/"; // Update final path
                    ArchiveFolder folder = new ArchiveFolder(path, contents[x], archive.getModifiedTime(), archive.getModifiedDate());
                    current.addFolder(folder); // Create and add this folder to the current ArchiveFolder
                } else { // Else if we are still traversing through the path
                    path += contents[x] + "/"; // Update path
                    ArchiveFolder folder = new ArchiveFolder(path, contents[x]);
                    current.addFolder(folder); // Create and add folder (we only know the path/name, so we can deduce the name only)
                    current = folder; // Update current ArchiveFolder to the newly created one for the next iteration of the for loop
                }
            }
            archive.getNext();
        }

    Assume that root is the root ArchiveFolder (initially empty), and that archive.getName() returns the name of the current file OR folder in the following fashion: file.txt, or folder1/file2.txt, or folder4/folder2/ (an empty folder); so basically the relative path from the root of the ZIP archive. Please read through the comments in the above code to familiarize yourself with it.

    Also assume that the addFolder method of an ArchiveFolder only adds the folder if it doesn't already exist (so there are no duplicate folders), and that it also updates the time and date of an existing folder if they are blank (i.e. it was an intermediate folder we only knew the name of, but now we know its details). The code for addFolder is pretty self-explanatory:

        public void addFolder(ArchiveFolder folder) {
            int loc = folders.indexOf(folder); // folders is the ArrayList containing ArchiveFolder's
            if (loc == -1) {
                folders.add(folder);
            } else {
                ArchiveFolder real = folders.get(loc);
                if (real.time == null) {
                    real.setTime(folder.getTime());
                    real.setDate(folder.getDate());
                }
            }
        }

    So I can't see anything wrong with the code. It works, and after finishing, the root ArchiveFolder contains all the files in the root of the ZIP, as I want it to, and it contains all the directories in the root folder, as I want it to. So you'd think it works as expected, but no: the ArchiveFolder's in the root folder don't contain the data inside those 'child' folders; each is just a blank folder with no additional files and folders (while it does really contain some more files/folders when viewed in WinZip). After debugging with Eclipse, the for loop does iterate through all the files (even those not included above), so this led me to believe that there is a problem with this line of the code:

        current = folder;

    What it does is update the current folder (used as an intermediate by the loop) to the newly added folder. I thought Java passed by reference, and thus that all new operations and additions in future ArchiveFile's and ArchiveFolder's would automatically be reflected, with parent ArchiveFolder's updated accordingly. But that does not appear to be the case? I know this is a long-ass post, and I really hope someone can help me out with this. Thanks in advance.
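    One hedged reading of the bug, with a sketch: Java passes object references by value, so current = folder is fine as far as it goes; the catch is that when an equal folder already exists, addFolder silently drops the new instance, while the loop keeps descending into that dropped copy. Returning the canonical instance that actually lives in the tree avoids this (names follow the post; the getTime()/getDate() accessors are assumed):

        // Return the folder that is actually stored in the tree, so callers
        // can keep descending into the right object.
        public ArchiveFolder addFolder(ArchiveFolder folder) {
            int loc = folders.indexOf(folder);
            if (loc == -1) {
                folders.add(folder);
                return folder; // newly stored instance
            }
            ArchiveFolder real = folders.get(loc);
            if (real.getTime() == null) { // fill in details learned later
                real.setTime(folder.getTime());
                real.setDate(folder.getDate());
            }
            return real; // the pre-existing instance, not the discarded duplicate
        }

    The traversal loop would then use current = current.addFolder(folder); instead of current = folder;.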

  • Table sorting & pagination with jQuery and Razor in ASP.NET MVC

    - by hajan
    Introduction jQuery enjoys living inside pages which are built on top of ASP.NET MVC Framework. The ASP.NET MVC is a place where things are organized very well and it is quite hard to make them dirty, especially because the pattern enforces you on purity (you can still make it dirty if you want so ;) ). We all know how easy is to build a HTML table with a header row, footer row and table rows showing some data. With ASP.NET MVC we can do this pretty easy, but, the result will be pure HTML table which only shows data, but does not includes sorting, pagination or some other advanced features that we were used to have in the ASP.NET WebForms GridView. Ok, there is the WebGrid MVC Helper, but what if we want to make something from pure table in our own clean style? In one of my recent projects, I’ve been using the jQuery tablesorter and tablesorter.pager plugins that go along. You don’t need to know jQuery to make this work… You need to know little CSS to create nice design for your table, but of course you can use mine from the demo… So, what you will see in this blog is how to attach this plugin to your pure html table and a div for pagination and make your table with advanced sorting and pagination features.   Demo Project Resources The resources I’m using for this demo project are shown in the following solution explorer window print screen: Content/images – folder that contains all the up/down arrow images, pagination buttons etc. You can freely replace them with your own, but keep the names the same if you don’t want to change anything in the CSS we will built later. Content/Site.css – The main css theme, where we will add the theme for our table too Controllers/HomeController.cs – The controller I’m using for this project Models/Person.cs – For this demo, I’m using Person.cs class Scripts – jquery-1.4.4.min.js, jquery.tablesorter.js, jquery.tablesorter.pager.js – required script to make the magic happens Views/Home/Index.cshtml – Index view (razor view engine) the other items are not important for the demo. ASP.NET MVC 1. Model In this demo I use only one Person class which defines Person entity with several properties. You can use your own model, maybe one which will access data from database or any other resource. Person.cs public class Person {     public string Name { get; set; }     public string Surname { get; set; }     public string Email { get; set; }     public int? Phone { get; set; }     public DateTime? DateAdded { get; set; }     public int? Age { get; set; }     public Person(string name, string surname, string email,         int? phone, DateTime? dateadded, int? age)     {         Name = name;         Surname = surname;         Email = email;         Phone = phone;         DateAdded = dateadded;         Age = age;     } } 2. View In our example, we have only one Index.chtml page where Razor View engine is used. Razor view engine is my favorite for ASP.NET MVC because it’s very intuitive, fluid and keeps your code clean. 3. Controller Since this is simple example with one page, we use one HomeController.cs where we have two methods, one of ActionResult type (Index) and another GetPeople() used to create and return list of people. 
HomeController.cs public class HomeController : Controller {     //     // GET: /Home/     public ActionResult Index()     {         ViewBag.People = GetPeople();         return View();     }     public List<Person> GetPeople()     {         List<Person> listPeople = new List<Person>();                  listPeople.Add(new Person("Hajan", "Selmani", "[email protected]", 070070070,DateTime.Now, 25));                     listPeople.Add(new Person("Straight", "Dean", "[email protected]", 123456789, DateTime.Now.AddDays(-5), 35));         listPeople.Add(new Person("Karsen", "Livia", "[email protected]", 46874651, DateTime.Now.AddDays(-2), 31));         listPeople.Add(new Person("Ringer", "Anne", "[email protected]", null, DateTime.Now, null));         listPeople.Add(new Person("O'Leary", "Michael", "[email protected]", 32424344, DateTime.Now, 44));         listPeople.Add(new Person("Gringlesby", "Anne", "[email protected]", null, DateTime.Now.AddDays(-9), 18));         listPeople.Add(new Person("Locksley", "Stearns", "[email protected]", 2135345, DateTime.Now, null));         listPeople.Add(new Person("DeFrance", "Michel", "[email protected]", 235325352, DateTime.Now.AddDays(-18), null));         listPeople.Add(new Person("White", "Johnson", null, null, DateTime.Now.AddDays(-22), 55));         listPeople.Add(new Person("Panteley", "Sylvia", null, 23233223, DateTime.Now.AddDays(-1), 32));         listPeople.Add(new Person("Blotchet-Halls", "Reginald", null, 323243423, DateTime.Now, 26));         listPeople.Add(new Person("Merr", "South", "[email protected]", 3232442, DateTime.Now.AddDays(-5), 85));         listPeople.Add(new Person("MacFeather", "Stearns", "[email protected]", null, DateTime.Now, null));         return listPeople;     } }   TABLE CSS/HTML DESIGN Now, lets start with the implementation. First of all, lets create the table structure and the main CSS. 1. HTML Structure @{     Layout = null;     } <!DOCTYPE html> <html> <head>     <title>ASP.NET & jQuery</title>     <!-- referencing styles, scripts and writing custom js scripts will go here --> </head> <body>     <div>         <table class="tablesorter">             <thead>                 <tr>                     <th> value </th>                 </tr>             </thead>             <tbody>                 <tr>                     <td>value</td>                 </tr>             </tbody>             <tfoot>                 <tr>                     <th> value </th>                 </tr>             </tfoot>         </table>         <div id="pager">                      </div>     </div> </body> </html> So, this is the main structure you need to create for each of your tables where you want to apply the functionality we will create. Of course the scripts are referenced once ;). As you see, our table has class tablesorter and also we have a div with id pager. In the next steps we will use both these to create the needed functionalities. 
The complete Index.cshtml coded to get the data from controller and display in the page is: <body>     <div>         <table class="tablesorter">             <thead>                 <tr>                     <th>Name</th>                     <th>Surname</th>                     <th>Email</th>                     <th>Phone</th>                     <th>Date Added</th>                 </tr>             </thead>             <tbody>                 @{                     foreach (var p in ViewBag.People)                     {                                 <tr>                         <td>@p.Name</td>                         <td>@p.Surname</td>                         <td>@p.Email</td>                         <td>@p.Phone</td>                         <td>@p.DateAdded</td>                     </tr>                     }                 }             </tbody>             <tfoot>                 <tr>                     <th>Name</th>                     <th>Surname</th>                     <th>Email</th>                     <th>Phone</th>                     <th>Date Added</th>                 </tr>             </tfoot>         </table>         <div id="pager" style="position: none;">             <form>             <img src="@Url.Content("~/Content/images/first.png")" class="first" />             <img src="@Url.Content("~/Content/images/prev.png")" class="prev" />             <input type="text" class="pagedisplay" />             <img src="@Url.Content("~/Content/images/next.png")" class="next" />             <img src="@Url.Content("~/Content/images/last.png")" class="last" />             <select class="pagesize">                 <option selected="selected" value="5">5</option>                 <option value="10">10</option>                 <option value="20">20</option>                 <option value="30">30</option>                 <option value="40">40</option>             </select>             </form>         </div>     </div> </body> So, mainly the structure is the same. I have added @Razor code to create table with data retrieved from the ViewBag.People which has been filled with data in the home controller. 2. CSS Design The CSS code I’ve created is: /* DEMO TABLE */ body {     font-size: 75%;     font-family: Verdana, Tahoma, Arial, "Helvetica Neue", Helvetica, Sans-Serif;     color: #232323;     background-color: #fff; } table { border-spacing:0; border:1px solid gray;} table.tablesorter thead tr .header {     background-image: url(images/bg.png);     background-repeat: no-repeat;     background-position: center right;     cursor: pointer; } table.tablesorter tbody td {     color: #3D3D3D;     padding: 4px;     background-color: #FFF;     vertical-align: top; } table.tablesorter tbody tr.odd td {     background-color:#F0F0F6; } table.tablesorter thead tr .headerSortUp {     background-image: url(images/asc.png); } table.tablesorter thead tr .headerSortDown {     background-image: url(images/desc.png); } table th { width:150px;            border:1px outset gray;            background-color:#3C78B5;            color:White;            cursor:pointer; } table thead th:hover { background-color:Yellow; color:Black;} table td { width:150px; border:1px solid gray;} PAGINATION AND SORTING Now, when everything is ready and we have the data, lets make pagination and sorting functionalities 1. 
1. jQuery Scripts referencing

<link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" />
<script src="@Url.Content("~/Scripts/jquery-1.4.4.min.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/jquery.tablesorter.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/jquery.tablesorter.pager.js")" type="text/javascript"></script>

2. jQuery Sorting and Pagination script

<script type="text/javascript">
    $(function () {
        $("table.tablesorter").tablesorter({ widthFixed: true, sortList: [[0, 0]] })
        .tablesorterPager({ container: $("#pager"), size: $(".pagesize option:selected").val() });
    });
</script>

So, with only two lines of code, I'm using both the tablesorter and tablesorterPager plugins and passing a few options to each.

Options added:

tablesorter – widthFixed: true – gives the columns a fixed width.

tablesorter – sortList: [[0,0]] – "An array of instructions for per-column sorting and direction in the format: [[columnIndex, sortDirection], ... ] where columnIndex is a zero-based index for your columns left-to-right and sortDirection is 0 for Ascending and 1 for Descending. A valid argument that sorts ascending first by column 1 and then column 2 looks like: [[0,0],[1,0]]" (source: http://tablesorter.com/docs/)

tablesorterPager – container: $("#pager") – tells the plugin where the pager container is; in our case, the div with id pager.

tablesorterPager – size – the default number of rows per page. I read the value of the selected option, so if you move selected to any other option in the select list, that number of rows becomes the default page size for the table too.

END RESULTS

1. The table once the page is loaded (the default is 5 results per page, automatically sorted by the 1st column since sortList is specified)
2. Sorted by Phone descending
3. Pagination changed to 10 items per page
4. Sorted by Phone and Name (use SHIFT to sort on multiple columns)
5. Sorted by Date Added
6. Page 3, 5 items per page

ADDITIONAL ENHANCEMENTS

We can make additional enhancements to the table, such as a search box for each column. I will cover this in one of my next blogs. Stay tuned.

DEMO PROJECT

You can download the demo project source code from HERE.

CONCLUSION

Once you finish the demo, run your page and open the source code. You will be amazed at how clean the code is. Pagination on the client side can be very useful. One of its benefits is performance, but if you have thousands of rows in your tables you will get the opposite result. Hence, it is sometimes a better idea to paginate on the back-end, and the best compromise is to combine both approaches. I use at most about 500 rows on the client side, and once the user reaches the last page we can trigger an ajax postback which gets the next 500 rows using server-side pagination of the same data. I would also recommend the blog post http://weblogs.asp.net/gunnarpeipman/archive/2010/09/14/returning-paged-results-from-repositories-using-pagedresult-lt-t-gt.aspx, which will help you understand how to return paged results from repositories. I hope this was a helpful post for you. Watch for my next posts ;). Please do let me know your feedback.

Best Regards,
Hajan
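As a quick illustration of the hybrid client/server paging idea from the conclusion, here is a minimal sketch of an action we might add to the controller; the action name, the 500-row chunk size and the reuse of GetPeople() are my assumptions for the example, not code from this post:

public ActionResult GetChunk(int chunkIndex, int chunkSize = 500)
{
    // Reuses the sample data source for brevity; in a real application
    // this would be a repository or database query that pages on the server.
    List<Person> chunk = GetPeople()
        .OrderBy(p => p.Name)           // stable order so chunks don't overlap
        .Skip(chunkIndex * chunkSize)   // server-side paging
        .Take(chunkSize)
        .ToList();

    // The client-side pager (tablesorterPager) then pages within this chunk.
    return Json(chunk, JsonRequestBehavior.AllowGet);
}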

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as the storage, service bus, access control, etc.. Interacting with Windows Azure services from Node.js is possible through the Windows Azure Node.js SDK, which is a module available on NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.

Consume Windows Azure Storage

Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles with some more features. Hence in this post I will only demonstrate hosting in a WACS worker role. The Node.js code for consuming WAS can also be used when hosting on WAWS, but since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used in a WAWS node application. We can use the solution that I created in my last post. Alternatively, we can create a new windows azure project in Visual Studio with a worker role, add "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and mark all files as "Copy always". In order to use windows azure services we need the Windows Azure Node.js SDK, also known as a module named "azure", which can be installed through NPM. Once downloaded and installed, we need to include it in our worker role project and mark it as "Copy always". You can use my "Copy all always" tool mentioned in my last post to update the current worker role project file. You can also find the source code of this tool here. The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool which provides a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming various windows azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them, but will only demonstrate how to use tables and the service runtime information in this post. You can find the full documentation of this SDK here. Back in Visual Studio, open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this.
var express = require("express");
var sql = require("node-sqlserver");

var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
var port = 80;

var app = express();

app.configure(function () {
    app.use(express.bodyParser());
});

app.get("/", function (req, res) {
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.json(results);
                }
            });
        }
    });
});

app.get("/text/:key/:culture", function (req, res) {
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            var key = req.params.key;
            var culture = req.params.culture;
            var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'";
            conn.queryRaw(command, function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.json(results);
                }
            });
        }
    });
});

app.get("/sproc/:key/:culture", function (req, res) {
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            var key = req.params.key;
            var culture = req.params.culture;
            var command = "EXEC GetItem '" + key + "', '" + culture + "'";
            conn.queryRaw(command, function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.json(results);
                }
            });
        }
    });
});

app.post("/new", function (req, res) {
    var key = req.body.key;
    var culture = req.body.culture;
    var val = req.body.val;

    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')";
            conn.queryRaw(command, function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.send(200, "Inserted Successful");
                }
            });
        }
    });
});

app.listen(port);

Now let's create a new function that copies the records from WASD to the table service:
1. Delete the table named "resource".
2. Create a new table named "resource". These 2 steps ensure that we have an empty table.
3. Load all records from the "resource" table in WASD.
4. For each record loaded from WASD, insert it into the table one by one.
5. Prompt the user when finished.

In order to use the table service we need the storage account and key, which can be found on the developer portal. Just select the storage account and click the Manage Keys button. Then create two local variables in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. I also created another variable to store the table name. In order to work with the table service I need to create a storage client for the table service.
This is very similar to the Windows Azure SDK for .NET. As in the code below, I created a new variable named "client" and used "createTableService", specifying my storage account name and key.

var azure = require("azure");
var storageAccountName = "synctile";
var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw==";
var tableName = "resource";
var client = azure.createTableService(storageAccountName, storageAccountKey);

Now create a new function for the URL "/was/init" so that we can trigger it through the browser. In this function we will first load all records from WASD.

app.get("/was/init", function (req, res) {
    // load all records from windows azure sql database
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    if (results.rows.length > 0) {
                        // begin to transform the records into table service
                    }
                }
            });
        }
    });
});

Once we have successfully loaded all the records we can start to transform them into the table service. First I need to recreate the table in the table service. This can be done by deleting and then creating the table through the table client I had just created previously.

app.get("/was/init", function (req, res) {
    // load all records from windows azure sql database
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    if (results.rows.length > 0) {
                        // begin to transform the records into table service
                        // recreate the table named 'resource'
                        client.deleteTable(tableName, function (error) {
                            client.createTableIfNotExists(tableName, function (error) {
                                if (error) {
                                    error["target"] = "createTableIfNotExists";
                                    res.send(500, error);
                                }
                                else {
                                    // transform the records
                                }
                            });
                        });
                    }
                }
            });
        }
    });
});

As you can see, the azure SDK provides its methods in the callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked the "deleteTable" method, providing the name of the table and a callback function which will be performed when the table has been deleted or the operation has failed. Underneath, the azure module performs the table deletion operation asynchronously in the POSIX async thread pool, and once it's done the callback function is invoked. This is the reason we need to nest the table creation code inside the deletion callback: if we performed the table creation code after the deletion code, they would be invoked in parallel. Next, for each record in WASD I create an entity and insert it into the table service. Finally I send the response to the browser. Can you find a bug in the code below? I will describe it later in this post.
1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: for (var i = 0; i < results.rows.length; i++) { 27: var entity = { 28: "PartitionKey": results.rows[i][1], 29: "RowKey": results.rows[i][0], 30: "Value": results.rows[i][2] 31: }; 32: client.insertEntity(tableName, entity, function (error) { 33: if (error) { 34: error["target"] = "insertEntity"; 35: res.send(500, error); 36: } 37: else { 38: console.log("entity inserted"); 39: } 40: }); 41: } 42: // send the 43: console.log("all done"); 44: res.send(200, "All done!"); 45: } 46: }); 47: }); 48: } 49: } 50: }); 51: } 52: }); 53: }); Now we can publish it to the cloud and have a try. But normally we’d better test it at the local emulator first. In Node.js SDK there are three build-in properties which provides the account name, key and host address for local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below. 1: // windows azure sql database 2: //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;"; 3: // sql server 4: var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};"; 5:  6: var azure = require("azure"); 7: var storageAccountName = "synctile"; 8: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 9: var tableName = "resource"; 10: // windows azure storage 11: //var client = azure.createTableService(storageAccountName, storageAccountKey); 12: // local storage emulator 13: var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST); Now let’s run the application and navigate to “localhost:12345/was/init” as I hosted it on port 12345. We can find it transformed the data from my local database to local table service. Everything looks fine. But there is a bug in my code. If we have a look on the Node.js command window we will find that it sent response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js perform all IO operations in non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending response was also executed in parallel, even though I wrote it at the end of my logic. 
The correct logic should be: when all entities have been copied to the table service with no error, then I send the response to the browser; otherwise I send an error message. To do so I need to import another module named "async", which helps us coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its "forEach" method for the asynchronous code that inserts the table entities. The first argument of "forEach" is the array that will be processed. The second argument is the operation to perform on each item in the array. And the third argument will be invoked when all items have been processed or any error has occurred. Here we can send our response to the browser.

app.get("/was/init", function (req, res) {
    // load all records from windows azure sql database
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    if (results.rows.length > 0) {
                        // begin to transform the records into table service
                        // recreate the table named 'resource'
                        client.deleteTable(tableName, function (error) {
                            client.createTableIfNotExists(tableName, function (error) {
                                if (error) {
                                    error["target"] = "createTableIfNotExists";
                                    res.send(500, error);
                                }
                                else {
                                    async.forEach(results.rows,
                                        // transform the records
                                        function (row, callback) {
                                            var entity = {
                                                "PartitionKey": row[1],
                                                "RowKey": row[0],
                                                "Value": row[2]
                                            };
                                            client.insertEntity(tableName, entity, function (error) {
                                                if (error) {
                                                    callback(error);
                                                }
                                                else {
                                                    console.log("entity inserted.");
                                                    callback(null);
                                                }
                                            });
                                        },
                                        // send response
                                        function (error) {
                                            if (error) {
                                                error["target"] = "insertEntity";
                                                res.send(500, error);
                                            }
                                            else {
                                                console.log("all done");
                                                res.send(200, "All done!");
                                            }
                                        }
                                    );
                                }
                            });
                        });
                    }
                }
            });
        }
    });
});

Run it locally and now we can see that the response is sent only after all entities have been inserted. Querying entities against the table service is simple as well. Just use the "queryEntity" method of the table service client, providing the partition key and row key. We can also provide more complex query criteria, as in the code for example here. In the code below I query an entity by the partition key and row key, and return the proper localization value in the response.

app.get("/was/:key/:culture", function (req, res) {
    var key = req.params.key;
    var culture = req.params.culture;
    client.queryEntity(tableName, culture, key, function (error, entity) {
        if (error) {
            res.send(500, error);
        }
        else {
            res.json(entity);
        }
    });
});

And then I tested it on the local emulator. Finally, if we want to publish this application to the cloud we should change the database connection string and storage account. For more information about how to consume the blob and queue services, as well as the service bus, please refer to the MSDN page.

Consume Service Runtime

As I mentioned above, before we publish our application to the cloud we need to change the connection string and account information in our code.
But if you have played with WACS you should know that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. This means we can have these values defined in the CSCFG and CSDEF files and let the runtime retrieve the proper values. For example, we can add some role settings through the property window of the role, specifying the connection string and storage account for cloud and local. And we can also use the endpoint defined in the role environment in our Node.js application. In the Node.js SDK we can get an object from "azure.RoleEnvironment", which provides the functionality to retrieve the configuration settings and endpoints, etc.. In the code below I defined the connection string variables and then used the SDK to retrieve them and initialize the table client.

var connectionString = "";
var storageAccountName = "";
var storageAccountKey = "";
var tableName = "";
var client;

azure.RoleEnvironment.getConfigurationSettings(function (error, settings) {
    if (error) {
        console.log("ERROR: getConfigurationSettings");
        console.log(JSON.stringify(error));
    }
    else {
        console.log(JSON.stringify(settings));
        connectionString = settings["SqlConnectionString"];
        storageAccountName = settings["StorageAccountName"];
        storageAccountKey = settings["StorageAccountKey"];
        tableName = settings["TableName"];

        console.log("connectionString = %s", connectionString);
        console.log("storageAccountName = %s", storageAccountName);
        console.log("storageAccountKey = %s", storageAccountKey);
        console.log("tableName = %s", tableName);

        client = azure.createTableService(storageAccountName, storageAccountKey);
    }
});

In this way we don't need to amend the code for the configurations between the local and cloud environments, since the service runtime will take care of it. At the end of the code we will also listen on the port retrieved from the SDK.

azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) {
    if (error) {
        console.log("ERROR: getCurrentRoleInstance");
        console.log(JSON.stringify(error));
    }
    else {
        console.log(JSON.stringify(instance));
        if (instance["endpoints"] && instance["endpoints"]["nodejs"]) {
            var endpoint = instance["endpoints"]["nodejs"];
            app.listen(endpoint["port"]);
        }
        else {
            app.listen(8080);
        }
    }
});

But if we tested the application right now we would find that it cannot retrieve any values from the service runtime. This is because, by default, the entry point of this role is the worker role class. In the windows azure environment the service runtime opens a named pipe to the entry point instance so that it can connect to the runtime and retrieve values. But in this case, since the entry point is the worker role class and Node.js is started inside the role, the named pipe is established between our worker role class and the service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the azure project and add a new element named Runtime, and inside it an element named EntryPoint which specifies the Node.js command line (a sketch follows below). That way the Node.js application will have the connection to the service runtime and will be able to read the configurations. Starting Node.js in the local emulator, we can see that it retrieves the connection string and storage account for the local environment.
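For reference, the Runtime/EntryPoint change described above looks roughly like the snippet below in ServiceDefinition.csdef — the role name and command line here are my assumptions for this example, so adjust them to your own project:

<WorkerRole name="WorkerRole1" vmsize="Small">
  <Runtime>
    <EntryPoint>
      <!-- Make node.exe the role's entry point so the service runtime
           opens its named pipe to the Node.js process itself. -->
      <ProgramEntryPoint commandLine="node.exe index.js" setReadyOnProcessStart="true" />
    </EntryPoint>
  </Runtime>
</WorkerRole>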
And if we publish our application to azure, it works with WASD and the storage service through the configurations for the cloud.

Summary

In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime: how to retrieve the configuration settings and the endpoint information. And in order to make the service runtime available to my Node.js application, I needed to create an entry point element in the CSDEF file and set "node.exe" as the entry point. I have used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host our Node.js application, and how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very new and young network application platform, but since it's simple and easy to learn and deploy, and it utilizes a single-threaded, non-blocking IO model, Node.js has become more and more popular for web application and web service development, especially for IO-intensive projects. And as Node.js is very good at scaling out, it's all the more useful on a cloud computing platform. Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement: "node-sqlserver" doesn't support SQL parameters yet, while "azure" does support using a storage connection string to create the storage client. But Microsoft is working on making them easier to use and on adding more features and functionality.

PS, you can download the source code here. You can download the source code of my "Copy all always" tool here.

Hope this helps,
Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Multi-tenant ASP.NET MVC – Introduction

    - by zowens
I've read a few different blogs that talk about multi-tenancy and how to resolve some of the issues surrounding it. What I've come to realize is that these implementations overcomplicate the issues and give only a muddy implementation! I've seen some really illogical code out there. I have recently been building a multi-tenancy framework for internal use at eagleenvision.net. Through this process, I've realized a few different techniques that make building multi-tenant applications actually quite easy. I will be posting a few different entries on the issue and my personal implementation. In this first post, I will discuss what multi-tenancy means and how my implementation will be structured.

So what's the problem?

Here's the deal. Multi-tenancy is basically a technique for reusing web application code: a multi-tenant application is an application that runs as a single instance for multiple clients. Here the "client" means different URL bindings on IIS using ASP.NET MVC. The problem with running different instances of what is essentially the same application is that you have to spin up different instances of ASP.NET, and as the number of running instances of ASP.NET grows, so does the memory footprint of IIS. Stack Exchange shifted its architecture to multi-tenancy in March. As the blog post explains, multi-tenancy saves cost in terms of memory utilization and physical disk storage. If you use the same code base for many applications, multi-tenancy just makes sense. You'll reduce the amount of work it takes to synchronize the site implementations, and you'll thank your lucky stars later for choosing to use one application for multiple sites. Multi-tenancy allows the freedom of extensibility while relying on some pre-built code.

You'd think this would be simple.

I have actually seen a real lack of reference material on the subject in terms of ASP.NET MVC. This is somewhat surprising given the number of users of ASP.NET MVC. However, I will certainly fill the void ;). Implementing a multi-tenant application takes a little thinking. It's not straightforward, because the possibilities of implementation are endless. I have yet to see a great implementation of a multi-tenant MVC application. The only one that comes close to what I have in mind is Rob Ashton's implementation (all the entries are listed on this page). There's some really nasty code in there… something I'd really like to avoid. He has also written a library (MvcEx) that attempts to aid multi-tenant development. This code is even worse, in my honest opinion. Once I start seeing Reflection.Emit, I have to assume the worst :) In all seriousness, if his implementation makes sense to you, use it! It's a fine implementation that should be given a look — at least look at the code. I will reference MvcEx going forward as a comparison to my implementation, and I will explain why my approach differs from MvcEx and how it is better or worse (hopefully better).

Core Goals of my Multi-Tenant Implementation

The first, and foremost, goal is to use Inversion of Control containers to my advantage. As you will see throughout this series, I pass around containers quite frequently and rely on their use heavily. I will be using StructureMap in my implementation; however, you could probably use your favorite IoC tool instead. <RANT> However, please don't be stupid and abstract your IoC tool. Each IoC is powerful, and by abstracting the capabilities you're doing yourself a real disservice. Who in the world swaps out IoC tools…?
No one!</RANT> (It had to be said.) I will outline some of the goodness of StructureMap as we go along. It is really an invaluable tool in my tool belt and simple to use in my multi-tenant implementation. The second core goal is to represent a tenant as easily as possible. Just as a dependency container will be a first-class citizen, so will a tenant. This allows us to easily extend and use tenants, and it also allows different ways of "plugging" tenants into your application. In my implementation, there will be a single dependency container for a single tenant; this enables isolation of each tenant's dependencies (see the sketch at the end of this post). The third goal is to use composition as a means to delegate "core" functions out to the tenant. More on this later.

Features

In MvcEx, "Modules" are a code element of the infrastructure. I have simplified this concept and named it "Features". A feature is a simple element of an application. Controllers can be specified to have a feature, and actions can have "sub-features". Each tenant can select the features it needs, and the other features will be hidden from the tenant's users. My implementation doesn't require everything to be a feature; a controller can be common to all tenants. For example (as you will see), I have a "Content" controller that returns the CSS, Javascript and Images for a tenant. This is logic common to all tenants and shouldn't be hidden or considered a "feature"; Content is a core component.

Up next

My next post will be all about the code. I will reveal some of the foundation of the way I do multi-tenancy. I will have posts dedicated to Foundation, Controllers, Views, Caching, Content and how to set up the tenants. Each post will go in-depth into the issues and implementation details, while adhering to my core goals outlined in this post. As always, comment with questions or DM me on twitter or send me an email.
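To give a rough picture of the second goal — one container per tenant — here is a minimal StructureMap sketch. The ITenant shape and the registries are hypothetical illustrations of the idea, not the actual framework code from this series:

using StructureMap;

public interface ITenant
{
    string Name { get; }
    IContainer Container { get; }
}

public class Tenant : ITenant
{
    public string Name { get; private set; }
    public IContainer Container { get; private set; }

    public Tenant(string name, Registry registry)
    {
        Name = name;
        // Each tenant owns its own container, so registrations for one
        // tenant can never leak into another tenant's dependency graph.
        Container = new Container(registry);
    }
}

// Usage sketch: two tenants resolving different implementations
// of the same service contract.
// var tenantA = new Tenant("TenantA", new TenantARegistry());
// var tenantB = new Tenant("TenantB", new TenantBRegistry());
// var serviceA = tenantA.Container.GetInstance<ISomeService>();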

    Read the article

  • Consume WCF Service InProcess using Agatha and WCF

    - by REA_ANDREW
I have been looking into this lately for a specific reason. For some integration tests I want to write, I want to control the types of instances used inside the service layer, but I want that control from the test class instance. One of the problems with just referencing the service is that, a lot of the time, it will by default run inside a different process. I am using StructureMap as my DI of choice, and one of the tools I use in line with RhinoMocks is StructureMap.AutoMocking. With StructureMap the main entry point is the ObjectFactory. This is process specific, so if I decide that I want a certain instance of a type to be used inside the service layer, I cannot configure the ObjectFactory from my test class, as that will only apply to the process it belongs to. This is where I started thinking about two things: running a WCF service in process, and being able to share mocked instances across processes. A colleague at work pointed me to a project for the latter, but I thought it would be a better solution if I could run the WCF service in process. One of the projects I use when thinking about WCF services is AGATHA, and it is the one I have used to try and get my head around doing this. Another asset I have is a book called Programming WCF Services by Juval Lowy; if you have not heard of it or read it, I would definitely recommend it. One of the many topics inside this book is the type of configuration you need to communicate with a service in the same process, and it turns out to be quite simple from a config point of view.

<system.serviceModel>
  <services>
    <service name="Agatha.ServiceLayer.WCF.WcfRequestProcessor">
      <endpoint address="net.pipe://localhost/MyPipe" binding="netNamedPipeBinding" contract="Agatha.Common.WCF.IWcfRequestProcessor"/>
    </service>
  </services>
  <client>
    <endpoint name="MyEndpoint" address="net.pipe://localhost/MyPipe" binding="netNamedPipeBinding" contract="Agatha.Common.WCF.IWcfRequestProcessor"/>
  </client>
</system.serviceModel>

You can see here that I am referencing the Agatha object and contract, but also that my binding and address use something called Named Pipes. This is sort of the "magic" which makes it happen in the same process. Next I need to open the service prior to calling the methods on a proxy, which I also need. My initial attempt at the proxy did not use any Agatha-specific coding, and one of the pains I found was that you obviously need to give your proxy the known types which the serializer should be aware of. So we need to add the known types to the proxy programmatically. I came across the following blog post which showed me how easy it is: http://bloggingabout.net/blogs/vagif/archive/2009/05/18/how-to-programmatically-define-known-types-in-wcf.aspx.

First Pass

So with this in mind, and inside a console app, this was my first pass at consuming a service in process. First, here is the proxy which I made making use of the Agatha IWcfRequestProcessor contract.
public class InProcProxy : ClientBase<Agatha.Common.WCF.IWcfRequestProcessor>, Agatha.Common.WCF.IWcfRequestProcessor
{
    public InProcProxy() { }

    public InProcProxy(string configurationName) : base(configurationName) { }

    public Agatha.Common.Response[] Process(params Agatha.Common.Request[] requests)
    {
        return Channel.Process(requests);
    }

    public void ProcessOneWayRequests(params Agatha.Common.OneWayRequest[] requests)
    {
        Channel.ProcessOneWayRequests(requests);
    }
}

So with the proxy in place, I could use it after opening the service. Here is the code I use inside the console app to make the request.

static void Main(string[] args)
{
    ComponentRegistration.Register();
    ServiceHost serviceHost = new ServiceHost(typeof(Agatha.ServiceLayer.WCF.WcfRequestProcessor));
    serviceHost.Open();
    Console.WriteLine("Service is running....");

    using (var proxy = new InProcProxy())
    {
        foreach (var operation in proxy.Endpoint.Contract.Operations)
        {
            foreach (var t in KnownTypeProvider.GetKnownTypes(null))
            {
                operation.KnownTypes.Add(t);
            }
        }
        var request = new GetProductsRequest();
        var responses = proxy.Process(new[] { request });
        var response = (GetProductsResponse)responses[0];
        Console.WriteLine("{0} Products have been retrieved", response.Products.Count);
    }

    serviceHost.Close();
    Console.WriteLine("Finished");
    Console.ReadLine();
}

So what I used here is the KnownTypeProvider of Agatha to easily get all the types I need for the service/proxy and add them to the proxy. My request handler for this was just a test one which always returns 2 products.

public class GetProductsHandler : RequestHandler<GetProductsRequest, GetProductsResponse>
{
    public override Agatha.Common.Response Handle(GetProductsRequest request)
    {
        return new GetProductsResponse
        {
            Products = new List<ProductDto>
            {
                new ProductDto{},
                new ProductDto{}
            }
        };
    }
}

Second Pass

After I did this, I started reading up on some more resources, including more by Davy Brion and others on Agatha. It turns out that the work I did above to create a derived class of ClientBase implementing Agatha.Common.WCF.IWcfRequestProcessor was not necessary, thanks to a nice class which is present inside the Agatha code base, RequestProcessorProxy, which takes care of this for you! :-) So, disregarding the class I made for the proxy and changing my code to use it, I am now left with the following:

static void Main(string[] args)
{
    ComponentRegistration.Register();
    ServiceHost serviceHost = new ServiceHost(typeof(Agatha.ServiceLayer.WCF.WcfRequestProcessor));
    serviceHost.Open();
    Console.WriteLine("Service is running....");

    using (var proxy = new RequestProcessorProxy())
    {
        var request = new GetProductsRequest();
        var responses = proxy.Process(new[] { request });
        var response = (GetProductsResponse)responses[0];
        Console.WriteLine("{0} Products have been retrieved", response.Products.Count);
    }

    serviceHost.Close();
    Console.WriteLine("Finished");
    Console.ReadLine();
}

Cheers for now,
Andy

References
Agatha
WCF InProcess Without WCF
StructureMap.AutoMocking
Cross Process Mocking
Agatha
Programming WCF Services by Juval Lowy

    Read the article

  • C#: System.Collections.Concurrent.ConcurrentQueue vs. Queue

    - by James Michael Hare
I love new toys, so of course when .NET 4.0 came out I felt like the proverbial kid in the candy store! Now, some people get all excited about the IDE and its new features, or about changes to WPF and Silverlight, and yes, those are all very fine and grand. But me, I get all excited about things that tend to affect my life on the backside of development. That's why, when I heard there were going to be concurrent container implementations in the latest version of .NET, I was salivating like Pavlov's dog at the dinner bell.

They seem so simple, really, that one could easily overlook them. Essentially they are implementations of containers (many that mirror the generic collections, others are new) that have been optimized with very efficient, limited, or no locking but are still completely thread-safe — and I just had to see what kind of an improvement that would translate into.

Since part of my job as a solutions architect here where I work is to help design, develop, and maintain the systems that process tons of requests each second, the thought of extremely efficient thread-safe containers was extremely appealing. Of course, they also rolled out a whole parallel development framework, which I won't get into in this post but will cover bits and pieces of as time goes by.

This time, I was mainly curious as to how well these new concurrent containers would perform compared to areas in our code where we manually synchronize them using lock or some other mechanism. So I set about to run a processing test with a series of producers and consumers that would be processing either a traditional System.Collections.Generic.Queue or a System.Collections.Concurrent.ConcurrentQueue.

Now, I wanted to keep the code as common as possible to make sure that the only variance was the container, so I created a test Producer and a test Consumer. The test Producer takes an Action<string> delegate which is responsible for taking a string and placing it on whichever queue we're testing in a thread-safe manner:

internal class Producer
{
    public int Iterations { get; set; }
    public Action<string> ProduceDelegate { get; set; }

    public void Produce()
    {
        for (int i = 0; i < Iterations; i++)
        {
            ProduceDelegate("Hello");
        }
    }
}

Likewise, I created a Consumer that takes a Func<string> that reads from whichever queue we're testing and returns either the string if data exists or null if not. If the item doesn't exist, it does a 10 ms wait before testing again. Once all the producers are done and join the main thread, a flag is set in each of the consumers to tell them that once the queue is empty they can shut down, since no other data is coming:

internal class Consumer
{
    public Func<string> ConsumeDelegate { get; set; }
    public bool HaltWhenEmpty { get; set; }

    public void Consume()
    {
        bool processing = true;

        while (processing)
        {
            string result = ConsumeDelegate();

            if (result == null)
            {
                if (HaltWhenEmpty)
                {
                    processing = false;
                }
                else
                {
                    Thread.Sleep(TimeSpan.FromMilliseconds(10));
                }
            }
            else
            {
                DoWork(); // do something non-trivial so consumers lag behind a bit
            }
        }
    }
}

Okay, now that we've done that, we can launch varying numbers of threads using lambdas for each different method of production/consumption.
First let's look at the lambdas for a typical System.Collections.Generic.Queue with locking:

// lambda for putting to a typical Queue with locking...
Action<string> productionDelegate = s =>
{
    lock (_mutex)
    {
        _mutexQueue.Enqueue(s);
    }
};

// and lambda for typical getting from Queue with locking...
Func<string> consumptionDelegate = () =>
{
    lock (_mutex)
    {
        if (_mutexQueue.Count > 0)
        {
            return _mutexQueue.Dequeue();
        }
    }
    return null;
};

Nothing new or interesting here — just typical locks on an internal object instance. Now let's look at using a ConcurrentQueue from the System.Collections.Concurrent library:

// lambda for putting to a ConcurrentQueue, notice it needs no locking!
Action<string> productionDelegate = s =>
{
    _concurrentQueue.Enqueue(s);
};

// lambda for getting from a ConcurrentQueue, once again, no locking required.
Func<string> consumptionDelegate = () =>
{
    string s;
    return _concurrentQueue.TryDequeue(out s) ? s : null;
};

So I pass each of these lambdas and the number of producer and consumer threads to launch, and take a look at the timing results. Basically I'm timing from the moment all threads start producing/consuming to the moment all threads rejoin. I won't bore you with the test code; basically it just creates the producers and consumers, launches them in their own threads, and then waits for them all to rejoin. The following are the timings from the start of all threads to the Join() on all threads completing. The producers create 10,000,000 items evenly between themselves, and when all producers are done they trigger the consumers to stop once the queue is empty.

These are the results in milliseconds from the ordinary Queue with locking:

Consumers   Producers   Run 1   Run 2   Run 3   Average (ms)
1           1            4284    5153    4226      4554.33
10          10           4044    3831    5010      4295.00
100         100          5497    5378    5612      5495.67
1000        1000        24234   25409   27160     25601.00

And the following are the results in milliseconds from the ConcurrentQueue with no locking necessary:

Consumers   Producers   Run 1   Run 2   Run 3   Average (ms)
1           1            3647    3643    3718      3669.33
10          10           2311    2136    2142      2196.33
100         100          2480    2416    2190      2362.00
1000        1000         7289    6897    7061      7082.33

Note that even though 2000 threads is obviously quite extreme, the concurrent queue actually scales really well, whereas the traditional queue with simple locking scales much more poorly.

I love the new concurrent collections. They look so much simpler without littering your code with the locking logic, and they perform much better. All in all, a great new toy to add to your arsenal of multi-threaded processing!
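One more thing worth noting (my addition, not part of the original benchmark): the consumer above has to poll with Thread.Sleep when the queue is empty. .NET 4.0 also ships BlockingCollection<T>, which wraps a ConcurrentQueue<T> by default and simply blocks consumers until items arrive. A minimal sketch:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

internal class BlockingDemo
{
    internal static void Run()
    {
        using (var queue = new BlockingCollection<string>())
        {
            // Consumer: GetConsumingEnumerable blocks until an item is
            // available and completes once CompleteAdding() has been called.
            var consumer = Task.Factory.StartNew(() =>
            {
                foreach (var item in queue.GetConsumingEnumerable())
                {
                    Console.WriteLine(item);
                }
            });

            // Producer: no locks, no polling.
            for (int i = 0; i < 10; i++)
            {
                queue.Add("Hello " + i);
            }

            queue.CompleteAdding(); // signal that no more items are coming
            consumer.Wait();
        }
    }
}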

    Read the article

  • How To Delete Top 100 Rows From SQL Server Tables

    - by Gopinath
If you want to delete the top 100 (or top n) records from an SQL Server table, it is very easy with the following query:

DELETE FROM MyTable
WHERE PK_Column IN (
    SELECT TOP 100 PK_Column
    FROM MyTable
    ORDER BY creation
)

Why Would You Require To Delete Top 100 Records?

I often delete the top n records of a table when the number of rows in the table is huge. Let's say I have 1,000,000,000 records in a table: deleting 10,000 rows at a time in a loop is faster than trying to delete all 1,000,000,000 at once. Whatever the reason may be, if you ever come across a requirement to delete a bunch of rows at a time, this query will be helpful to you.
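If you want to drive that delete-in-batches loop from application code, here is a minimal C# sketch. The connection string, table name and batch size are placeholders for illustration, and DELETE TOP (n) requires SQL Server 2005 or later:

using System.Data.SqlClient;

internal class BatchDelete
{
    internal static void Run()
    {
        // Placeholder connection string — substitute your own.
        var connectionString = "Server=.;Database=MyDb;Trusted_Connection=True;";

        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            int affected;
            do
            {
                // Delete an arbitrary batch of 10,000 rows per round trip,
                // keeping each transaction (and log growth) small.
                using (var cmd = new SqlCommand("DELETE TOP (10000) FROM MyTable", conn))
                {
                    cmd.CommandTimeout = 300;
                    affected = cmd.ExecuteNonQuery();
                }
            } while (affected > 0);
        }
    }
}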

    Read the article

  • Localhost not working after installing PHP on Mountain Lion

    - by zen
    I've installed php using brew install php54 --with-mysql, I've set up all the path correctly. which php will give me /usr/local/bin/php php -v will give me PHP 5.4.8 (cli) (built: Nov 20 2012 09:29:31) php --ini will give me: Configuration File (php.ini) Path: /usr/local/etc/php/5.4 Loaded Configuration File: /usr/local/etc/php/5.4/php.ini Scan for additional .ini files in: /usr/local/etc/php/5.4/conf.d Additional .ini files parsed: (none) apachectl -V | grep httpd.conf will give me -D SERVER_CONFIG_FILE="/private/etc/apache2/httpd.conf" I believe everything is correct, but after I restarted my apache I keep getting error Service Temporarily Unavailable The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later. This is my httpd.conf file: # # This is the main Apache HTTP server configuration file. It contains the # configuration directives that give the server its instructions. # See <URL:http://httpd.apache.org/docs/2.2> for detailed information. # In particular, see # <URL:http://httpd.apache.org/docs/2.2/mod/directives.html> # for a discussion of each configuration directive. # # Do NOT simply read the instructions in here without understanding # what they do. They're here only as hints or reminders. If you are unsure # consult the online docs. You have been warned. # # Configuration and logfile names: If the filenames you specify for many # of the server's control files begin with "/" (or "drive:/" for Win32), the # server will use that explicit path. If the filenames do *not* begin # with "/", the value of ServerRoot is prepended -- so "log/foo_log" # with ServerRoot set to "/usr" will be interpreted by the # server as "/usr/log/foo_log". # # ServerRoot: The top of the directory tree under which the server's # configuration, error, and log files are kept. # # Do not add a slash at the end of the directory path. If you point # ServerRoot at a non-local disk, be sure to point the LockFile directive # at a local disk. If you wish to share the same ServerRoot for multiple # httpd daemons, you will need to change at least LockFile and PidFile. # ServerRoot "/usr" # # Listen: Allows you to bind Apache to specific IP addresses and/or # ports, instead of the default. See also the <VirtualHost> # directive. # # Change this to Listen on specific IP addresses as shown below to # prevent Apache from glomming onto all bound IP addresses. # #Listen 12.34.56.78:80 Listen 127.0.0.1:80 # # Dynamic Shared Object (DSO) Support # # To be able to use the functionality of a module which was built as a DSO you # have to place corresponding `LoadModule' lines at this location so the # directives contained in it are actually available _before_ they are used. # Statically compiled modules (those listed by `httpd -l') do not need # to be loaded here. 
# # Example: # LoadModule foo_module modules/mod_foo.so # LoadModule authn_file_module libexec/apache2/mod_authn_file.so LoadModule authn_dbm_module libexec/apache2/mod_authn_dbm.so LoadModule authn_anon_module libexec/apache2/mod_authn_anon.so LoadModule authn_dbd_module libexec/apache2/mod_authn_dbd.so LoadModule authn_default_module libexec/apache2/mod_authn_default.so LoadModule authz_host_module libexec/apache2/mod_authz_host.so LoadModule authz_groupfile_module libexec/apache2/mod_authz_groupfile.so LoadModule authz_user_module libexec/apache2/mod_authz_user.so LoadModule authz_dbm_module libexec/apache2/mod_authz_dbm.so LoadModule authz_owner_module libexec/apache2/mod_authz_owner.so LoadModule authz_default_module libexec/apache2/mod_authz_default.so LoadModule auth_basic_module libexec/apache2/mod_auth_basic.so LoadModule auth_digest_module libexec/apache2/mod_auth_digest.so LoadModule cache_module libexec/apache2/mod_cache.so LoadModule disk_cache_module libexec/apache2/mod_disk_cache.so LoadModule mem_cache_module libexec/apache2/mod_mem_cache.so LoadModule dbd_module libexec/apache2/mod_dbd.so LoadModule dumpio_module libexec/apache2/mod_dumpio.so LoadModule reqtimeout_module libexec/apache2/mod_reqtimeout.so LoadModule ext_filter_module libexec/apache2/mod_ext_filter.so LoadModule include_module libexec/apache2/mod_include.so LoadModule filter_module libexec/apache2/mod_filter.so LoadModule substitute_module libexec/apache2/mod_substitute.so LoadModule deflate_module libexec/apache2/mod_deflate.so LoadModule log_config_module libexec/apache2/mod_log_config.so LoadModule log_forensic_module libexec/apache2/mod_log_forensic.so LoadModule logio_module libexec/apache2/mod_logio.so LoadModule env_module libexec/apache2/mod_env.so LoadModule mime_magic_module libexec/apache2/mod_mime_magic.so LoadModule cern_meta_module libexec/apache2/mod_cern_meta.so LoadModule expires_module libexec/apache2/mod_expires.so LoadModule headers_module libexec/apache2/mod_headers.so LoadModule ident_module libexec/apache2/mod_ident.so LoadModule usertrack_module libexec/apache2/mod_usertrack.so #LoadModule unique_id_module libexec/apache2/mod_unique_id.so LoadModule setenvif_module libexec/apache2/mod_setenvif.so LoadModule version_module libexec/apache2/mod_version.so LoadModule proxy_module libexec/apache2/mod_proxy.so LoadModule proxy_connect_module libexec/apache2/mod_proxy_connect.so LoadModule proxy_ftp_module libexec/apache2/mod_proxy_ftp.so LoadModule proxy_http_module libexec/apache2/mod_proxy_http.so LoadModule proxy_scgi_module libexec/apache2/mod_proxy_scgi.so LoadModule proxy_ajp_module libexec/apache2/mod_proxy_ajp.so LoadModule proxy_balancer_module libexec/apache2/mod_proxy_balancer.so LoadModule ssl_module libexec/apache2/mod_ssl.so LoadModule mime_module libexec/apache2/mod_mime.so LoadModule dav_module libexec/apache2/mod_dav.so LoadModule status_module libexec/apache2/mod_status.so LoadModule autoindex_module libexec/apache2/mod_autoindex.so LoadModule asis_module libexec/apache2/mod_asis.so LoadModule info_module libexec/apache2/mod_info.so LoadModule cgi_module libexec/apache2/mod_cgi.so LoadModule dav_fs_module libexec/apache2/mod_dav_fs.so LoadModule vhost_alias_module libexec/apache2/mod_vhost_alias.so LoadModule negotiation_module libexec/apache2/mod_negotiation.so LoadModule dir_module libexec/apache2/mod_dir.so LoadModule imagemap_module libexec/apache2/mod_imagemap.so LoadModule actions_module libexec/apache2/mod_actions.so LoadModule speling_module 
libexec/apache2/mod_speling.so LoadModule userdir_module libexec/apache2/mod_userdir.so LoadModule alias_module libexec/apache2/mod_alias.so LoadModule rewrite_module libexec/apache2/mod_rewrite.so #LoadModule perl_module libexec/apache2/mod_perl.so LoadModule php5_module local/Cellar/php54/5.4.8/libexec/apache2/libphp5.so #LoadModule hfs_apple_module libexec/apache2/mod_hfs_apple.so <IfModule !mpm_netware_module> <IfModule !mpm_winnt_module> # # If you wish httpd to run as a different user or group, you must run # httpd as root initially and it will switch. # # User/Group: The name (or #number) of the user/group to run httpd as. # It is usually good practice to create a dedicated user and group for # running httpd, as with most system services. # User _www Group _www </IfModule> </IfModule> # 'Main' server configuration # # The directives in this section set up the values used by the 'main' # server, which responds to any requests that aren't handled by a # <VirtualHost> definition. These values also provide defaults for # any <VirtualHost> containers you may define later in the file. # # All of these directives may appear inside <VirtualHost> containers, # in which case these default settings will be overridden for the # virtual host being defined. # # # ServerAdmin: Your address, where problems with the server should be # e-mailed. This address appears on some server-generated pages, such # as error documents. e.g. [email protected] # ServerAdmin [email protected] # # ServerName gives the name and port that the server uses to identify itself. # This can often be determined automatically, but we recommend you specify # it explicitly to prevent problems during startup. # # If your host doesn't have a registered DNS name, enter its IP address here. # #ServerName www.example.com:80 # # DocumentRoot: The directory out of which you will serve your # documents. By default, all requests are taken from this directory, but # symbolic links and aliases may be used to point to other locations. # DocumentRoot "/Library/WebServer/Documents" # # Each directory to which Apache has access can be configured with respect # to which services and features are allowed and/or disabled in that # directory (and its subdirectories). # # First, we configure the "default" to be a very restrictive set of # features. # <Directory /> Options FollowSymLinks AllowOverride None Order deny,allow Deny from all </Directory> # # Note that from this point forward you must specifically allow # particular features to be enabled - so if something's not working as # you might expect, make sure that you have specifically enabled it # below. # # # This should be changed to whatever you set DocumentRoot to. # <Directory "/Library/WebServer/Documents"> # # Possible values for the Options directive are "None", "All", # or any combination of: # Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews # # Note that "MultiViews" must be named *explicitly* --- "Options All" # doesn't give it to you. # # The Options directive is both complicated and important. Please see # http://httpd.apache.org/docs/2.2/mod/core.html#options # for more information. # Options Indexes FollowSymLinks MultiViews # # AllowOverride controls what directives may be placed in .htaccess files. # It can be "All", "None", or any combination of the keywords: # Options FileInfo AuthConfig Limit # AllowOverride None # # Controls who can get stuff from this server. 
# Order allow,deny Allow from all </Directory> # # DirectoryIndex: sets the file that Apache will serve if a directory # is requested. # <IfModule dir_module> DirectoryIndex index.html </IfModule> # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. # <FilesMatch "^\.([Hh][Tt]|[Dd][Ss]_[Ss])"> Order allow,deny Deny from all Satisfy All </FilesMatch> # # Apple specific filesystem protection. # <Files "rsrc"> Order allow,deny Deny from all Satisfy All </Files> <DirectoryMatch ".*\.\.namedfork"> Order allow,deny Deny from all Satisfy All </DirectoryMatch> # # ErrorLog: The location of the error log file. # If you do not specify an ErrorLog directive within a <VirtualHost> # container, error messages relating to that virtual host will be # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. # ErrorLog "/private/var/log/apache2/error_log" # # LogLevel: Control the number of messages logged to the error_log. # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. # LogLevel warn <IfModule log_config_module> # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %b" common <IfModule logio_module> # You need to enable mod_logio.c to use %I and %O LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio </IfModule> # # The location and format of the access logfile (Common Logfile Format). # If you do not define any access logfiles within a <VirtualHost> # container, they will be logged here. Contrariwise, if you *do* # define per-<VirtualHost> access logfiles, transactions will be # logged therein and *not* in this file. # CustomLog "/private/var/log/apache2/access_log" common # # If you prefer a logfile with access, agent, and referer information # (Combined Logfile Format) you can use the following directive. # #CustomLog "/private/var/log/apache2/access_log" combined </IfModule> <IfModule alias_module> # # Redirect: Allows you to tell clients about documents that used to # exist in your server's namespace, but do not anymore. The client # will make a new request for the document at its new location. # Example: # Redirect permanent /foo http://www.example.com/bar # # Alias: Maps web paths into filesystem paths and is used to # access content that does not live under the DocumentRoot. # Example: # Alias /webpath /full/filesystem/path # # If you include a trailing / on /webpath then the server will # require it to be present in the URL. You will also likely # need to provide a <Directory> section to allow access to # the filesystem path. # # ScriptAlias: This controls which directories contain server scripts. # ScriptAliases are essentially the same as Aliases, except that # documents in the target directory are treated as applications and # run by the server when requested rather than as documents sent to the # client. The same rules about trailing "/" apply to ScriptAlias # directives as to Alias. # ScriptAliasMatch ^/cgi-bin/((?!(?i:webobjects)).*$) "/Library/WebServer/CGI-Executables/$1" </IfModule> <IfModule cgid_module> # # ScriptSock: On threaded servers, designate the path to the UNIX # socket used to communicate with the CGI daemon of mod_cgid. 
# #Scriptsock /private/var/run/cgisock </IfModule> # # "/Library/WebServer/CGI-Executables" should be changed to whatever your ScriptAliased # CGI directory exists, if you have that configured. # <Directory "/Library/WebServer/CGI-Executables"> AllowOverride None Options None Order allow,deny Allow from all </Directory> # # DefaultType: the default MIME type the server will use for a document # if it cannot otherwise determine one, such as from filename extensions. # If your server contains mostly text or HTML documents, "text/plain" is # a good value. If most of your content is binary, such as applications # or images, you may want to use "application/octet-stream" instead to # keep browsers from trying to display binary files as though they are # text. # DefaultType text/plain <IfModule mime_module> # # TypesConfig points to the file containing the list of mappings from # filename extension to MIME-type. # TypesConfig /private/etc/apache2/mime.types # # AddType allows you to add to or override the MIME configuration # file specified in TypesConfig for specific file types. # #AddType application/x-gzip .tgz # # AddEncoding allows you to have certain browsers uncompress # information on the fly. Note: Not all browsers support this. # #AddEncoding x-compress .Z #AddEncoding x-gzip .gz .tgz # # If the AddEncoding directives above are commented-out, then you # probably should define those extensions to indicate media types: # AddType application/x-compress .Z AddType application/x-gzip .gz .tgz # # AddHandler allows you to map certain file extensions to "handlers": # actions unrelated to filetype. These can be either built into the server # or added with the Action directive (see below) # # To use CGI scripts outside of ScriptAliased directories: # (You will also need to add "ExecCGI" to the "Options" directive.) # #AddHandler cgi-script .cgi # For type maps (negotiated resources): #AddHandler type-map var # # Filters allow you to process content before it is sent to the client. # # To parse .shtml files for server-side includes (SSI): # (You will also need to add "Includes" to the "Options" directive.) # #AddType text/html .shtml #AddOutputFilter INCLUDES .shtml </IfModule> # # The mod_mime_magic module allows the server to use various hints from the # contents of the file itself to determine its type. The MIMEMagicFile # directive tells the module where the hint definitions are located. # #MIMEMagicFile /private/etc/apache2/magic # # Customizable error responses come in three flavors: # 1) plain text 2) local redirects 3) external redirects # # Some examples: #ErrorDocument 500 "The server made a boo boo." #ErrorDocument 404 /missing.html #ErrorDocument 404 "/cgi-bin/missing_handler.pl" #ErrorDocument 402 http://www.example.com/subscription_info.html # # # MaxRanges: Maximum number of Ranges in a request before # returning the entire resource, or one of the special # values 'default', 'none' or 'unlimited'. # Default setting is to accept 200 Ranges. #MaxRanges unlimited # # EnableMMAP and EnableSendfile: On systems that support it, # memory-mapping or the sendfile syscall is used to deliver # files. This usually improves server performance, but must # be turned off when serving from networked-mounted # filesystems or if support for these functions is otherwise # broken on your system. 
#
#EnableMMAP off
#EnableSendfile off

# 6894961
TraceEnable off

# Supplemental configuration
#
# The configuration files in the /private/etc/apache2/extra/ directory can be
# included to add extra features or to modify the default configuration of
# the server, or you may simply copy their contents here and change as
# necessary.

# Server-pool management (MPM specific)
Include /private/etc/apache2/extra/httpd-mpm.conf

# Multi-language error messages
#Include /private/etc/apache2/extra/httpd-multilang-errordoc.conf

# Fancy directory listings
Include /private/etc/apache2/extra/httpd-autoindex.conf

# Language settings
Include /private/etc/apache2/extra/httpd-languages.conf

# User home directories
Include /private/etc/apache2/extra/httpd-userdir.conf

# Real-time info on requests and configuration
#Include /private/etc/apache2/extra/httpd-info.conf

# Virtual hosts
#Include /private/etc/apache2/extra/httpd-vhosts.conf

# Local access to the Apache HTTP Server Manual
Include /private/etc/apache2/extra/httpd-manual.conf

# Distributed authoring and versioning (WebDAV)
#Include /private/etc/apache2/extra/httpd-dav.conf

# Various default settings
#Include /private/etc/apache2/extra/httpd-default.conf

# Secure (SSL/TLS) connections
#Include /private/etc/apache2/extra/httpd-ssl.conf

#
# Note: The following must be present to support
# starting without SSL on platforms with no /dev/random equivalent
# but a statically compiled-in mod_ssl.
#
<IfModule ssl_module>
SSLRandomSeed startup builtin
SSLRandomSeed connect builtin
</IfModule>

Include /private/etc/apache2/other/*.conf

Please help me; I have spent two days trying to make this work. The error log keeps reporting:

[Tue Nov 20 10:47:40 2012] [error] proxy: HTTP: disabled connection for (localhost)
[Tue Nov 20 11:59:32 2012] [error] (61)Connection refused: proxy: HTTP: attempt to connect to [fe80::1]:20559 (localhost) failed
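The second log line is the telling one: mod_proxy is resolving localhost to the IPv6 link-local address [fe80::1], while the backend on port 20559 is presumably listening on IPv4 only. A minimal sketch of the likely fix, assuming the failing ProxyPass lives in one of the Include'd extra/other .conf files not shown above (the /app mount point is a placeholder, and the port is only taken from the error log):

<IfModule proxy_module>
    # Pin the proxy target to IPv4 so "localhost" cannot resolve to [fe80::1].
    # Replace /app and 20559 with the actual mount point and backend port.
    ProxyPass        /app http://127.0.0.1:20559/
    ProxyPassReverse /app http://127.0.0.1:20559/
</IfModule>

An alternative under the same assumption is to make the backend itself listen on the IPv6 loopback as well, but pinning 127.0.0.1 in the proxy directives is the smaller change.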

    Read the article

  • Event notification for ::SCardListReaders() [migrated]

    - by dpb
    In the PC/SC (Personal Computer Smart Card) Appln, I have (MSCAPI USB CCID based) 1) Calling ::SCardListReaders() returns SCARD_E_NO_READERS_AVAILABLE (0x8010002E). This call is made after OS starts fresh after reboot, from a thread which is part of my custom windows service. 2) Adding delay before ::SCardListReaders() call solves the problem. 3) How can I solve this problem elegantly ? Not using delay & waiting for some event to notify me. since a) Different machines may require different delay values b) Cannot loop since the error code is genuine c) Could not find this event as part of System Event Notification Service or similar COM interface d) platform is Windows 7 Any Help Appreciated.

    Read the article

  • Code for Waterglen Horse Farms application? [migrated]

    - by user73459
    I am having trouble with the solution to the Waterglens Horse Farms application in the Visual Basic 2010 Reloaded book. The problem reads: Each year Sabrina Cantrell, owner of waterglen horse farms enters four of her horses in five local horse races. She uses the table shown below to keep track of her horses in 5 local races. in the table , a 1 shows that the horse won a race, a 2 shows 2nd place, a 3 is 3rd place , and a 0 the horse didn't finish in the top 3. More details in these 2 images: http://imgur.com/a/YTNEX Here is what I have tried so far: Dim racescores(,) As Integer = {{0, 1, 0, 3, 2}, {1, 0, 2, 0, 0}, {0, 3, 0, 1, 0}, {3, 2, 1, 0, 0}} Dim subscript As Integer = 0 Dim noplace As Integer = 0 If horse1RadioButton.Checked Then Do While subscript < racescores(3, 4) If racescores(0, subscript) = 0 Then noplace = noplace + 1 End If subscript = subscript + 1 Loop noPlaceDisplayLabel.Text = noplace End If

    Read the article
