Search Results

Search found 19308 results on 773 pages for 'network efficiency'.


  • How to design a C / C++ library to be usable in many client languages?

    - by Brian Schimmel
    I'm planning to code a library that should be usable by a large number of people in on a wide spectrum of platforms. What do I have to consider to design it right? To make this questions more specific, there are four "subquestions" at the end. Choice of language Considering all the known requirements and details, I concluded that a library written in C or C++ was the way to go. I think the primary usage of my library will be in programs written in C, C++ and Java SE, but I can also think of reasons to use it from Java ME, PHP, .NET, Objective C, Python, Ruby, bash scrips, etc... Maybe I cannot target all of them, but if it's possible, I'll do it. Requirements It would be to much to describe the full purpose of my library here, but there are some aspects that might be important to this question: The library itself will start out small, but definitely will grow to enormous complexity, so it is not an option to maintain several versions in parallel. Most of the complexity will be hidden inside the library, though The library will construct an object graph that is used heavily inside. Some clients of the library will only be interested in specific attributes of specific objects, while other clients must traverse the object graph in some way Clients may change the objects, and the library must be notified thereof The library may change the objects, and the client must be notified thereof, if it already has a handle to that object The library must be multi-threaded, because it will maintain network connections to several other hosts While some requests to the library may be handled synchronously, many of them will take too long and must be processed in the background, and notify the client on success (or failure) Of course, answers are welcome no matter if they address my specific requirements, or if they answer the question in a general way that matters to a wider audience! My assumptions, so far So here are some of my assumptions and conclusions, which I gathered in the past months: Internally I can use whatever I want, e.g. C++ with operator overloading, multiple inheritance, template meta programming... as long as there is a portable compiler which handles it (think of gcc / g++) But my interface has to be a clean C interface that does not involve name mangling Also, I think my interface should only consist of functions, with basic/primitive data types (and maybe pointers) passed as parameters and return values If I use pointers, I think I should only use them to pass them back to the library, not to operate directly on the referenced memory For usage in a C++ application, I might also offer an object oriented interface (Which is also prone to name mangling, so the App must either use the same compiler, or include the library in source form) Is this also true for usage in C# ? For usage in Java SE / Java EE, the Java native interface (JNI) applies. I have some basic knowledge about it, but I should definitely double check it. Not all client languages handle multithreading well, so there should be a single thread talking to the client For usage on Java ME, there is no such thing as JNI, but I might go with Nested VM For usage in Bash scripts, there must be an executable with a command line interface For the other client languages, I have no idea For most client languages, it would be nice to have kind of an adapter interface written in that language. 
I think there are tools to automatically generate this for Java and some others. For object-oriented languages, it might be possible to create an object-oriented adapter which hides the fact that the interface to the library is function based - but I don't know if it's worth the effort. Possible subquestions: Is this possible with manageable effort, or is it just too much portability? Are there any good books / websites about this kind of design criteria? Are any of my assumptions wrong? Which open source libraries are worth studying to learn from their design / interface / source? Meta: this question is rather long - do you see any way to split it into several smaller ones? (If you reply to this, do it as a comment, not as an answer.)
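
To make the "clean C interface" assumption above concrete, here is a minimal sketch of what such a public header could look like. Everything in it (the mylib_ names, the callback shape, the attribute accessors) is a hypothetical placeholder rather than an existing API; the point is simply opaque handles, primitive parameter types, and an extern "C" guard so the same header serves C and C++ clients without name mangling.

```c
/* mylib.h -- hypothetical interface sketch: opaque handle, primitive types,
 * a callback for change notification, nothing C++-specific across the boundary. */
#ifndef MYLIB_H
#define MYLIB_H

#ifdef __cplusplus
extern "C" {
#endif

typedef struct mylib_session mylib_session;          /* opaque handle, never dereferenced by clients */
typedef void (*mylib_notify_fn)(void *user_data,
                                int object_id,
                                int event_code);      /* library -> client change notification */

mylib_session *mylib_open(const char *config_string);
int   mylib_get_attribute(mylib_session *s, int object_id,
                          const char *name, char *out, int out_len);
int   mylib_set_attribute(mylib_session *s, int object_id,
                          const char *name, const char *value);
void  mylib_subscribe(mylib_session *s, mylib_notify_fn fn, void *user_data);
void  mylib_close(mylib_session *s);

#ifdef __cplusplus
}
#endif

#endif /* MYLIB_H */
```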


  • Java CORBA Client Disconnects Immediately

    - by Benny
    I have built a Java CORBA application that subscribes to an event server. The application narrows and logs on just fine, but as soon as an event is sent to the client, it breaks with the error below. Please advise. 2010/04/25!13.00.00!E00555!enserver!EventServiceIF_i.cpp!655!PID(7390)!enserver - e._info=system exception, ID 'IDL:omg.org/CORBA/TRANSIENT:1.0' TAO exception, minor code = 54410093 (invocation connect failed; ECONNRESET), completed = NO EDIT: Please note, this only happens when running on some machines. It works on some, but not others. Even on the same platform (I've tried Windows XP/7 and CentOS linux) Some work, some don't... Here is the WireShark output...looks like the working PC is much more interactive with the network compared to the non-working PC. Working PC No. Time Source Destination Protocol Info 62 28.837255 10.10.10.209 10.10.10.250 TCP 50169 > 23120 [SYN] Seq=0 Win=8192 Len=0 MSS=1260 WS=8 63 28.907068 fe80::5de0:8d21:937e:c649 ff02::1:3 LLMNR Standard query A isatap 64 28.907166 10.10.10.209 224.0.0.252 LLMNR Standard query A isatap 65 29.107259 10.10.10.209 10.255.255.255 NBNS Name query NB ISATAP<00> 66 29.227000 10.10.10.250 10.10.10.209 TCP 23120 > 50169 [SYN, ACK] Seq=0 Ack=1 Win=32768 Len=0 MSS=1260 WS=0 67 29.227032 10.10.10.209 10.10.10.250 TCP 50169 > 23120 [ACK] Seq=1 Ack=1 Win=66560 Len=0 68 29.238063 10.10.10.209 10.10.10.250 GIOP GIOP 1.1 Request s=326 id=5 (two-way): op=logon 69 29.291765 10.10.10.250 10.10.10.209 GIOP GIOP 1.1 Reply s=420 id=5: No Exception 70 29.301395 10.10.10.209 10.10.10.250 GIOP GIOP 1.1 Request s=369 id=6 (two-way): op=registerEventStat 71 29.348275 10.10.10.250 10.10.10.209 GIOP GIOP 1.1 Reply s=60 id=6: No Exception 72 29.405250 10.10.10.209 10.10.10.250 TCP 50170 > telnet [SYN] Seq=0 Win=8192 Len=0 MSS=1260 WS=8 73 29.446055 10.10.10.250 10.10.10.209 TCP telnet > 50170 [SYN, ACK] Seq=0 Ack=1 Win=32768 Len=0 MSS=1260 WS=0 74 29.446128 10.10.10.209 10.10.10.250 TCP 50170 > telnet [ACK] Seq=1 Ack=1 Win=66560 Len=0 75 29.452021 10.10.10.209 10.10.10.250 TELNET Telnet Data ... 76 29.483537 10.10.10.250 10.10.10.209 TELNET Telnet Data ... 77 29.483651 10.10.10.209 10.10.10.250 TELNET Telnet Data ... 78 29.523463 10.10.10.250 10.10.10.209 TCP telnet > 50170 [ACK] Seq=4 Ack=5 Win=32768 Len=0 79 29.554954 10.10.10.209 10.10.10.250 TCP 50169 > 23120 [ACK] Seq=720 Ack=505 Win=66048 Len=0 Non-working PC No. Time Source Destination Protocol Info 1 0.000000 10.10.10.209 10.10.10.250 TCP 64161 > 23120 [SYN] Seq=0 Win=8192 Len=0 MSS=1260 WS=8 2 2.999847 10.10.10.209 10.10.10.250 TCP 64161 > 23120 [SYN] Seq=0 Win=8192 Len=0 MSS=1260 WS=8 3 4.540773 Cisco_3c:78:00 Cisco-Li_55:87:72 ARP Who has 10.0.0.1? Tell 10.10.10.209 4 4.540843 Cisco-Li_55:87:72 Cisco_3c:78:00 ARP 10.0.0.1 is at 00:1a:70:55:87:72 5 8.992284 10.10.10.209 10.10.10.250 TCP 64161 > 23120 [SYN] Seq=0 Win=8192 Len=0 MSS=1260


  • jqGrid multiplesearch - How do I add and/or column for each row?

    - by jimasp
    I have a jqGrid multiple search dialog as below: (Note the first empty td in each row). What I want is a search grid like this (below), where: The first td is filled in with "and/or" accordingly. The corresponding search filters are also built that way. The empty td is there by default, so I assume that this is a standard jqGrid feature that I can turn on, but I can't seem to find it in the docs. My code looks like this: <script type="text/javascript"> $(document).ready(function () { var lastSel; var pageSize = 10; // the initial pageSize $("#grid").jqGrid({ url: '/Ajax/GetJqGridActivities', editurl: '/Ajax/UpdateJqGridActivities', datatype: "json", colNames: [ 'Id', 'Description', 'Progress', 'Actual Start', 'Actual End', 'Status', 'Scheduled Start', 'Scheduled End' ], colModel: [ { name: 'Id', index: 'Id', searchoptions: { sopt: ['eq', 'ne']} }, { name: 'Description', index: 'Description', searchoptions: { sopt: ['eq', 'ne']} }, { name: 'Progress', index: 'Progress', editable: true, searchoptions: { sopt: ['eq', 'ne', "gt", "ge", "lt", "le"]} }, { name: 'ActualStart', index: 'ActualStart', editable: true, searchoptions: { sopt: ['eq', 'ne', "gt", "ge", "lt", "le"]} }, { name: 'ActualEnd', index: 'ActualEnd', editable: true, searchoptions: { sopt: ['eq', 'ne', "gt", "ge", "lt", "le"]} }, { name: 'Status', index: 'Status.Name', editable: true, searchoptions: { sopt: ['eq', 'ne']} }, { name: 'ScheduledStart', index: 'ScheduledStart', searchoptions: { sopt: ['eq', 'ne', "gt", "ge", "lt", "le"]} }, { name: 'ScheduledEnd', index: 'ScheduledEnd', searchoptions: { sopt: ['eq', 'ne', "gt", "ge", "lt", "le"]} } ], jsonReader: { root: 'rows', id: 'Id', repeatitems: false }, rowNum: pageSize, // this is the pageSize rowList: [pageSize, 50, 100], pager: '#pager', sortname: 'Id', autowidth: true, shrinkToFit: false, viewrecords: true, sortorder: "desc", caption: "JSON Example", onSelectRow: function (id) { if (id && id !== lastSel) { $('#grid').jqGrid('restoreRow', lastSel); $('#grid').jqGrid('editRow', id, true); lastSel = id; } } }); // change the options (called via searchoptions) var updateGroupOpText = function ($form) { $('select.opsel option[value="AND"]', $form[0]).text('and'); $('select.opsel option[value="OR"]', $form[0]).text('or'); $('select.opsel', $form[0]).trigger('change'); }; // $(window).bind('resize', function() { // ("#grid").setGridWidth($(window).width()); //}).trigger('resize'); // paging bar settings (inc. buttons) // and advanced search $("#grid").jqGrid('navGrid', '#pager', { edit: true, add: false, del: false }, // buttons {}, // edit option - these are ordered params! {}, // add options {}, // del options {closeOnEscape: true, multipleSearch: true, closeAfterSearch: true, onInitializeSearch: updateGroupOpText, afterRedraw: updateGroupOpText}, // search options {} ); // TODO: work out how to use global $.ajaxSetup instead $('#grid').ajaxError(function (e, jqxhr, settings, exception) { var str = "Ajax Error!\n\n"; if (jqxhr.status === 0) { str += 'Not connect.\n Verify Network.'; } else if (jqxhr.status == 404) { str += 'Requested page not found. 
[404]'; } else if (jqxhr.status == 500) { str += 'Internal Server Error [500].'; } else if (exception === 'parsererror') { str += 'Requested JSON parse failed.'; } else if (exception === 'timeout') { str += 'Time out error.'; } else if (exception === 'abort') { str += 'Ajax request aborted.'; } else { str += 'Uncaught Error.\n' + jqxhr.responseText; } alert(str); }); $("#grid").jqGrid('bindKeys'); $("#grid").jqGrid('inlineNav', "#grid"); }); </script> <table id="grid"/> <div id="pager"/>
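
One thing worth trying, as a hedged sketch rather than a confirmed fix: the multipleSearch dialog builds its and/or selector from the groupOps array in the search options, so supplying it explicitly may be all that is needed to populate that empty first cell. The option names below follow the jqGrid search-options documentation as I recall it; verify them against the jqGrid version in use.

```javascript
// Sketch: pass groupOps in the search options so the multipleSearch dialog
// offers an explicit and/or choice for the filter group.
$("#grid").jqGrid('navGrid', '#pager',
    { edit: true, add: false, del: false },  // nav buttons
    {},                                      // edit options
    {},                                      // add options
    {},                                      // del options
    {                                        // search options
        closeOnEscape: true,
        multipleSearch: true,
        closeAfterSearch: true,
        groupOps: [
            { op: "AND", text: "and" },
            { op: "OR",  text: "or" }
        ],
        onInitializeSearch: updateGroupOpText,
        afterRedraw: updateGroupOpText
    },
    {}                                       // view options
);
```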


  • How to close a JFrame in the middle of a program

    - by Nick Gibson
    public class JFrameWithPanel extends JFrame implements ActionListener, ItemListener { int packageIndex; double price; double[] prices = {49.99, 39.99, 34.99, 99.99}; DecimalFormat money = new DecimalFormat("$0.00"); JLabel priceLabel = new JLabel("Total Price: "+price); JButton button = new JButton("Check Price"); JComboBox packageChoice = new JComboBox(); JPanel pane = new JPanel(); TextField text = new TextField(5); JButton accept = new JButton("Accept"); JButton decline = new JButton("Decline"); JCheckBox serviceTerms = new JCheckBox("I Agree to the Terms of Service.", false); JTextArea termsOfService = new JTextArea("This is a text area", 5, 10); public JFrameWithPanel() { super("JFrame with Panel"); setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); pane.add(packageChoice); setContentPane(pane); setSize(250,250); setVisible(true); packageChoice.addItem("A+ Certification"); packageChoice.addItem("Network+ Certification "); packageChoice.addItem("Security+ Certifictation"); packageChoice.addItem("CIT Full Test Package"); pane.add(button); button.addActionListener(this); pane.add(text); text.setEditable(false); text.setBackground(Color.WHITE); text.addActionListener(this); pane.add(termsOfService); termsOfService.setEditable(false); termsOfService.setBackground(Color.lightGray); pane.add(serviceTerms); serviceTerms.addItemListener(this); pane.add(accept); accept.addActionListener(this); pane.add(decline); decline.addActionListener(this); } public void actionPerformed(ActionEvent e) { packageIndex = packageChoice.getSelectedIndex(); price = prices[packageIndex]; text.setText("$"+price); Object source = e.getSource(); if(source == accept) { if(serviceTerms.isSelected() == false) { JOptionPane.showMessageDialog(null,"Please accept the terms of service.", "Terms of Service", JOptionPane.ERROR_MESSAGE); } else { JOptionPane.showMessageDialog(null,"Thank you. We will now move on to registering your product."); pane.dispose(); } } else if(source == decline) { System.exit(0); } } public void itemStateChanged(ItemEvent e) { int select = e.getStateChange(); } public static void main(String[] args) { String value1; int constant = 1, invalidNum = 0, answerParse, packNum, packPrice; JOptionPane.showMessageDialog(null,"Hello!"+"\nWelcome to the CIT Test Program."); JOptionPane.showMessageDialog(null,"IT WORKS!"); } }//class How do I get this frame to close so that my JOptionPane Message Dialogs can continue in the program without me exiting the program completely. EDIT: I tried .dispose() but I get this: cannot find symbol symbol : method dispose() location: class javax.swing.JPanel pane.dispose(); ^
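
The compile error comes from calling dispose() on the JPanel: dispose() is defined on java.awt.Window (and therefore on JFrame), not on JPanel. A small, self-contained sketch of the intended behaviour is below; the class and method names are made up for illustration.

```java
import javax.swing.JFrame;
import javax.swing.JOptionPane;

public class CloseFrameDemo extends JFrame {

    public CloseFrameDemo() {
        super("JFrame with Panel");
        // DISPOSE_ON_CLOSE closes only this window; EXIT_ON_CLOSE would end the JVM
        // and with it any JOptionPane dialogs that are supposed to follow.
        setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);
        setSize(250, 250);
        setVisible(true);
    }

    private void onAccept() {
        JOptionPane.showMessageDialog(this, "Thank you. We will now move on to registering your product.");
        dispose();   // call dispose() on the frame itself, not on the JPanel
    }

    public static void main(String[] args) {
        CloseFrameDemo frame = new CloseFrameDemo();
        frame.onAccept();   // simulate the Accept button
        JOptionPane.showMessageDialog(null, "Dialogs keep working after the frame has closed.");
    }
}
```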


  • Sometimes big delay when using PHP mail()

    - by Robbert Dam
    I have a website which processes orders from a Windows application. This works as follows: User clicks "Order now" in the windows app App uploads a file with POST to a PHP script The script immediately calls the PHP mail() function (order is not stored in a db) This works fine most of the time. However, sometimes a big delay occurs (several days). Customers calls why the product has not yet been delivered. E-mail headers of delayed mail follows: Microsoft Mail Internet Headers Version 2.0 Received: from barracuda.nkl.nl ([10.0.0.1]) by smtp.nkl.nl with Microsoft SMTPSVC(6.0.3790.3959); Wed, 26 May 2010 16:26:51 +0200 X-ASG-Debug-ID: 1274883818-2f8800000000-X58hIK X-Barracuda-URL: http://10.0.0.1:8000/cgi-bin/mark.cgi Received: from server45.firstfind.nl (localhost [127.0.0.1]) by barracuda.nkl.nl (Spam & Virus Firewall) with ESMTP id ECAFD15776A for <[email protected]>; Wed, 26 May 2010 16:23:38 +0200 (CEST) Received: from server45.firstfind.nl (server45.firstfind.nl [93.94.226.76]) by barracuda.nkl.nl with ESMTP id 85bAT2AU58kkxjPb for <[email protected]>; Wed, 26 May 2010 16:23:38 +0200 (CEST) X-Barracuda-Envelope-From: [email protected] Received: from server45.firstfind.nl (localhost [127.0.0.1]) by server45.firstfind.nl (8.13.8/8.13.8/Debian-3+etch1) with ESMTP id o4QEM3Hb004301 for <[email protected]>; Wed, 26 May 2010 16:23:31 +0200 Received: (from nklsemin@localhost) by server45.firstfind.nl (8.13.8/8.13.8/Submit) id o4J9lA7M031307; Wed, 19 May 2010 11:47:10 +0200 Date: Wed, 19 May 2010 11:47:10 +0200 Message-Id: <[email protected]> To: [email protected] X-ASG-Orig-Subj: easyfit - ref: Hoen3443 Subject: easyfit - ref: Hoen3443 X-PHP-Script: www.nklseminar.nl/emailer/upload.php for 77.61.220.217 From: [email protected] Content-type: text/html X-Virus-Scanned: by amavisd-new X-Barracuda-Connect: server45.firstfind.nl[93.94.226.76] X-Barracuda-Start-Time: 1274883820 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by Barracuda Spam & Virus Firewall at nkl.nl X-Barracuda-Spam-Score: 0.92 X-Barracuda-Spam-Status: No, SCORE=0.92 using global scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=3.0 tests=DATE_IN_PAST_96_XX, DATE_IN_PAST_96_XX_2, HTML_MESSAGE, MIME_HEADER_CTYPE_ONLY, MIME_HTML_ONLY, NO_REAL_NAME X-Barracuda-Spam-Report: Code version 3.2, rules version 3.2.2.30817 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- 0.00 NO_REAL_NAME From: does not include a real name 0.01 DATE_IN_PAST_96_XX Date: is 96 hours or more before Received: date 0.00 MIME_HTML_ONLY BODY: Message only has text/html MIME parts 0.00 HTML_MESSAGE BODY: HTML included in message 0.86 MIME_HEADER_CTYPE_ONLY 'Content-Type' found without required MIME headers 2.07 DATE_IN_PAST_96_XX_2 DATE_IN_PAST_96_XX_2 Return-Path: [email protected] X-OriginalArrivalTime: 26 May 2010 14:26:51.0343 (UTC) FILETIME=[7B80DDF0:01CAFCDF] The delay seems to occur here: Received: from server45.firstfind.nl (localhost [127.0.0.1]) by server45.firstfind.nl (8.13.8/8.13.8/Debian-3+etch1) with ESMTP id o4QEM3Hb004301 for <[email protected]>; Wed, 26 May 2010 16:23:31 +0200 Received: (from nklsemin@localhost) by server45.firstfind.nl (8.13.8/8.13.8/Submit) id o4J9lA7M031307; Wed, 19 May 2010 11:47:10 +0200 I've reported this issue various times to the web hosting service that hosts my website. They say the delay does not occur in their network (impossible). 
But they do confirm that the e-mail is first seen on their mail server on May 26, which is 7 days after the mail was composed. The order is marked with the timestamp of the user's local PC, which also matches May 19 (so it's not a PC clock problem). It's also interesting that all delayed mails (for orders placed on different days) come in at once, so I suddenly receive 14 e-mails in my mailbox from various days. Any idea where this delay may be introduced? Could there be a bug in my PHP code that causes this? (I cannot believe I could introduce a 7-day delay in my PHP code.)
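
Since the Received: chain shows the message waiting a week in the hosting provider's sendmail queue, it helps to be able to prove exactly when the script handed the message off. A hedged sketch of a small wrapper (the function name and addresses are placeholders) that stamps an explicit Date: header and writes an independent log line:

```php
<?php
// Hypothetical wrapper around mail(): records the hand-off time so a delayed
// Received: chain can later be compared against the submission timestamp.
function send_order_mail($to, $subject, $htmlBody)
{
    $headers = "From: orders@example.com\r\n"
             . "Content-Type: text/html; charset=UTF-8\r\n"
             . "Date: " . date('r') . "\r\n";   // explicit submission time

    $accepted = mail($to, $subject, $htmlBody, $headers);

    // error_log() keeps a record outside the mail system itself.
    error_log(sprintf('[%s] mail() %s for %s (%s)',
        date('c'), $accepted ? 'accepted' : 'REJECTED', $to, $subject));

    return $accepted;
}
```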


  • Getting the DirectShow VideoRender filter to respond to MediaType changes on its Input Pin?

    - by Jonathan Websdale
    Below is the code extract from my decoder transform filter which takes in data from my source filter which is taking RTP network data from an IP camera. The source filter, decode filter can dynamically respond to changes in the camera image dimensions since I need to handle resolution changes in the decode library. I've used the 'ReceiveConnection' method as described in the DirectShow help, passing the new MediaType data in the next sample. However, I can't get the Video Mixing Renderer to accept the resolution changes dynamically even though the renderer will render the different resolution if the graph is stopped and restarted. Can anyone point out what I need to do to get the renderer to handle dynamic resolution changes? HRESULT CDecoder::Receive(IMediaSample* pIn) { //Input data does not necessarily correspond one-to-one //with output frames, so we must override Receive instead //of Transform. HRESULT hr = S_OK; //Deliver input to library long cBytes = pIn->GetActualDataLength(); BYTE* pSrc; pIn->GetPointer(&pSrc); try { hr = m_codec.Decode(pSrc, cBytes, (hr == S_OK)?&tStart : NULL); } catch (...) { hr = E_UNEXPECTED; } if (FAILED(hr)) { if (theLog.enabled()){theLog.strm() << "Decoder Error " << hex << hr << dec << " - resetting input"; theLog.write();} //Force reset of decoder m_bReset = true; m_codec.ResetInput(); //We have handled the error -- don't pass upstream or the source may stop. return S_OK; } //Extract and deliver any decoded frames hr = DeliverDecodedFrames(); return hr; } HRESULT CDecoder::DeliverDecodedFrames() { HRESULT hr = S_OK; for (;;) { DecodedFrame frame; bool bFrame = m_codec.GetDecodedFrame(frame); if (!bFrame) { break; } CMediaType mtIn; CMediaType mtOut; GetMediaType( PINDIR_INPUT, &mtIn); GetMediaType( PINDIR_OUTPUT, &mtOut); //Get the output pin's current image resolution VIDEOINFOHEADER* pvi = (VIDEOINFOHEADER*)mtOut.Format(); if( pvi->bmiHeader.biWidth != m_cxInput || pvi->bmiHeader.biHeight != m_cyInput) { HRESULT hr = GetPin(PINDIR_OUTPUT)->GetConnected()->ReceiveConnection(GetPin(PINDIR_OUTPUT), &mtIn); if(SUCCEEDED(hr)) { SetMediaType(PINDIR_OUTPUT, &mtIn); } } IMediaSamplePtr pOut; hr = m_pOutput->GetDeliveryBuffer(&pOut, 0, 0, NULL); if (FAILED(hr)) { break; } AM_MEDIA_TYPE* pmt; if (pOut->GetMediaType(&pmt) == S_OK) { CMediaType mt(*pmt); DeleteMediaType(pmt); SetMediaType(PINDIR_OUTPUT, &mt); pOut->SetMediaType(&mt); } // crop, tramslate and deliver BYTE* pDest; pOut->GetPointer(&pDest); m_pConverter->Convert(frame.Width(), frame.Height(), frame.GetY(), frame.GetU(), frame.GetV(), pDest); pOut->SetActualDataLength(m_pOutput->CurrentMediaType().GetSampleSize()); pOut->SetSyncPoint(true); if (frame.HasTimestamp()) { REFERENCE_TIME tStart = frame.Timestamp(); REFERENCE_TIME tStop = tStart+1; pOut->SetTime(&tStart, &tStop); } m_pOutput->Deliver(pOut); } return hr; }


  • Geolocation through Android's GPS Provider on a website?

    - by Corey Ogburn
    I'm trying to get the geolocation of the mobile device in a regular website, not a webview of an application or anything native like that. I'm getting a location, but it's highly inaccurate, the accuracy comes back as 3230 or some other outrageous number. I'm assuming that's in meters, either way it's not nearly accurate enough. By comparison, the same webpage on a laptop gets an accuracy of 30-40. My first thought was that it was using the Network Provider instead of the GPS Provider, telling me where I am based on tower location and reach. A little research later I found enableHighAccuracy and set it true in the options that I pass. After including that, I still notice no difference. Here's the test page's HTML/javascript: <html> <head> <script type="text/javascript" src="http://ecn.dev.virtualearth.net/mapcontrol/mapcontrol.ashx?v=7.0"></script> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.3/jquery.min.js"></script> <script type="text/javascript"> function OnLoad() { $("#Status").text("Init"); if (navigator.geolocation) { $("#Status").text("Supports Geolocation"); navigator.geolocation.getCurrentPosition(HandleLocation, LocationError, { enableHighAccuracy: true }); $("#Status").text("Sent position request..."); } else { $("#Status").text("Doesn't support geolocation"); } } function HandleLocation(position) { $("#Status").text("Received response:"); $("#Position").text("(" + position.coords.latitude + ", " + position.coords.longitude + ") accuracy: " + position.coords.accuracy); var loc = new Microsoft.Maps.Location(position.coords.latitude, position.coords.longitude); GetMap(loc); } function LocationError(error) { switch(error.code) { case error.PERMISSION_DENIED: alert("Location not provided"); break; case error.POSITION_UNAVAILABLE: alert("Current location not available"); break; case error.TIMEOUT: alert("Timeout"); break; default: alert("unknown error"); break; } } function GetMap(loc) { var map = new Microsoft.Maps.Map(document.getElementById("mapDiv"), {credentials: "Aj59meaCR1e7rNgkfQy7j08Pd3mzfP1r04hGesGmLe2a3ZwZ3iGecwPX2SNPWq5a", center: loc, mapTypeId: Microsoft.Maps.MapTypeId.road, zoom: 15}); } </script> </head> <body onload="javascript:OnLoad()"> <div id="Status"></div> <div id="Position"></div><br/> <div id='mapDiv' style="position:relative; width:600px; height:400px;"></div> </body> </html> I'm testing this on a rooted MyTouch 3G running Cyanogen 6.1 stable, Android 2.2 and GPS is enabled. In case rooting was a problem, I have also had various friends and coworkers try the webpage on their non-rooted 2.0+ Android devices. Each phone had various effects on the accuracy, but none were better than 1000, I attribute this to the different carriers. I have not (but eventually will) tested with iPhone or other location-aware cell phones.
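
One pattern worth trying (a sketch, not a guaranteed fix): getCurrentPosition often reports the first coarse network fix it has, even with enableHighAccuracy set, whereas watchPosition keeps the GPS alive and delivers progressively better fixes that can be filtered by accuracy. This reuses the page's own HandleLocation and LocationError callbacks; the 50-metre threshold is an arbitrary choice.

```javascript
// Keep watching until a fix arrives that is accurate enough, then stop.
var watchId = navigator.geolocation.watchPosition(function (position) {
    if (position.coords.accuracy <= 50) {   // metres; pick a threshold that suits the app
        navigator.geolocation.clearWatch(watchId);
        HandleLocation(position);
    }
}, LocationError, {
    enableHighAccuracy: true,
    timeout: 30000,    // allow the GPS time to get a real fix
    maximumAge: 0      // never accept a cached position
});
```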


  • whatever:hover not working in IE6 and IE7

    - by jovialwhispers
    Below is the css for my menu #menu { position: absolute; left: 170px; top: 92px; background: #336699; float: left; z-index:50; } menu ul { list-style: none; margin: 0; padding: 0; width: 9em; float: left; } menu a, #menu h2 { font: bold 11px/20px arial, helvetica, sans-serif; display: block; border-top-width: 1px; border-bottom-width: 1px; border-left-width: 1px; border-right-width: 1px; border-style: solid; border-color: #ccc #888 #555 #bbb; margin: 0; padding: 4px 3px; } menu h2 { color: #fff; background: #336699; text-transform: uppercase; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 1px; border-right-width: 1px; } menu a { color: #fff; background: #79A3CF; text-decoration: none; } menu a:hover { color: #a00; background: #fff; } menu li { position: relative; } menu li ul li { position: relative; width: 12em; } menu ul ul { position: absolute; z-index: 500; } menu ul ul ul { position: absolute; top: 0; left: 100%; } div#menu ul ul, div#menu ul li:hover ul ul, div#menu ul ul li:hover ul ul { display: none; } div#menu ul li:hover ul, div#menu ul ul li:hover ul, div#menu ul ul ul li:hover ul { display: block; } Here is my html design for the menu. It is a horizontal menu with dropdown submenus <div id="menu"> <ul> <li><h2>Computers</h2> <ul> <li>subitem <ul><li>subitems</li> </li> <li>submitem</li> <li>submitem</li> </li> <li><h2>Network</h2> <ul><li>subitems</li> </li> </ul> </div> I am using the minified version of the whatever hover script. Checked few posts here on this issue but couldn't solve the problem. The submenus are not appearing on IE6 and IE7. I trid adding <!--[if lt IE 8]> but no results...
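
Independent of the whatever:hover script, two things stand out: every CSS rule after the first is written against `menu` rather than `#menu` (possibly lost when pasting), and the markup has unclosed `<li>`/`<ul>` elements, which IE6/7 tolerate far less gracefully than modern browsers. A sketch of the same menu with properly closed nesting:

```html
<!-- Same menu structure, with every <li> and <ul> explicitly closed -->
<div id="menu">
  <ul>
    <li><h2>Computers</h2>
      <ul>
        <li>subitem
          <ul>
            <li>subitems</li>
          </ul>
        </li>
        <li>subitem</li>
        <li>subitem</li>
      </ul>
    </li>
    <li><h2>Network</h2>
      <ul>
        <li>subitems</li>
      </ul>
    </li>
  </ul>
</div>
```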


  • Visual Studio reports that not all code paths return a value, even though they do

    - by chris12892
    I have an API in NETMF C# that I am writing that includes a function to send an HTTP request. For those who are familiar with NETMF, this is a heavily modified version of the "webClient" example, which a simple application that demonstrates how to submit an HTTP request, and recive a response. In the sample, it simply prints the response and returns void,. In my version, however, I need it to return the HTTP response. For some reason, Visual Studio reports that not all code paths return a value, even though, as far as I can tell, they do. Here is my code... /// <summary> /// This is a modified webClient /// </summary> /// <param name="url"></param> private string httpRequest(string url) { // Create an HTTP Web request. HttpWebRequest request = HttpWebRequest.Create(url) as HttpWebRequest; // Set request.KeepAlive to use a persistent connection. request.KeepAlive = true; // Get a response from the server. WebResponse resp = request.GetResponse(); // Get the network response stream to read the page data. if (resp != null) { Stream respStream = resp.GetResponseStream(); string page = ""; byte[] byteData = new byte[4096]; char[] charData = new char[4096]; int bytesRead = 0; Decoder UTF8decoder = System.Text.Encoding.UTF8.GetDecoder(); int totalBytes = 0; // allow 5 seconds for reading the stream respStream.ReadTimeout = 5000; // If we know the content length, read exactly that amount of // data; otherwise, read until there is nothing left to read. if (resp.ContentLength != -1) { for (int dataRem = (int)resp.ContentLength; dataRem > 0; ) { Thread.Sleep(500); bytesRead = respStream.Read(byteData, 0, byteData.Length); if (bytesRead == 0) throw new Exception("Data laes than expected"); dataRem -= bytesRead; // Convert from bytes to chars, and add to the page // string. int byteUsed, charUsed; bool completed = false; totalBytes += bytesRead; UTF8decoder.Convert(byteData, 0, bytesRead, charData, 0, bytesRead, true, out byteUsed, out charUsed, out completed); page = page + new String(charData, 0, charUsed); } page = new String(System.Text.Encoding.UTF8.GetChars(byteData)); } else throw new Exception("No content-Length reported"); // Close the response stream. For Keep-Alive streams, the // stream will remain open and will be pushed into the unused // stream list. resp.Close(); return page; } } Any ideas? Thanks...
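
The compiler is objecting to the `resp == null` case: when the if-block is skipped, execution falls off the end of the method without returning a string. A stripped-down sketch of the control flow it wants is below; the chunked UTF-8 reading from the original is collapsed into a StreamReader here purely for brevity, so the details may need adjusting for NETMF.

```csharp
using System;
using System.IO;
using System.Net;

class HttpRequester
{
    private string HttpRequest(string url)
    {
        HttpWebRequest request = WebRequest.Create(url) as HttpWebRequest;
        request.KeepAlive = true;

        WebResponse resp = request.GetResponse();
        if (resp != null)
        {
            using (StreamReader reader = new StreamReader(resp.GetResponseStream()))
            {
                string page = reader.ReadToEnd();   // simplified stand-in for the chunked read
                resp.Close();
                return page;
            }
        }

        // Without a return or throw here, the null path has no value to return --
        // which is exactly what "not all code paths return a value" means.
        throw new Exception("No response received");
    }
}
```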


  • How to generalize a method call in Java (to avoid code duplication)

    - by dln385
    I have a process that needs to call a method and return its value. However, there are several different methods that this process may need to call, depending on the situation. If I could pass the method and its arguments to the process (like in Python), then this would be no problem. However, I don't know of any way to do this in Java. Here's a concrete example. (This example uses Apache ZooKeeper, but you don't need to know anything about ZooKeeper to understand the example.) The ZooKeeper object has several methods that will fail if the network goes down. In this case, I always want to retry the method. To make this easy, I made a "BetterZooKeeper" class that inherits the ZooKeeper class, and all of its methods automatically retry on failure. This is what the code looked like: public class BetterZooKeeper extends ZooKeeper { private void waitForReconnect() { // logic } @Override public Stat exists(String path, Watcher watcher) { while (true) { try { return super.exists(path, watcher); } catch (KeeperException e) { // We will retry. } waitForReconnect(); } } @Override public byte[] getData(String path, boolean watch, Stat stat) { while (true) { try { return super.getData(path, watch, stat); } catch (KeeperException e) { // We will retry. } waitForReconnect(); } } @Override public void delete(String path, int version) { while (true) { try { super.delete(path, version); return; } catch (KeeperException e) { // We will retry. } waitForReconnect(); } } } (In the actual program there is much more logic and many more methods that I took out of the example for simplicity.) We can see that I'm using the same retry logic, but the arguments, method call, and return type are all different for each of the methods. Here's what I did to eliminate the duplication of code: public class BetterZooKeeper extends ZooKeeper { private void waitForReconnect() { // logic } @Override public Stat exists(final String path, final Watcher watcher) { return new RetryableZooKeeperAction<Stat>() { @Override public Stat action() { return BetterZooKeeper.super.exists(path, watcher); } }.run(); } @Override public byte[] getData(final String path, final boolean watch, final Stat stat) { return new RetryableZooKeeperAction<byte[]>() { @Override public byte[] action() { return BetterZooKeeper.super.getData(path, watch, stat); } }.run(); } @Override public void delete(final String path, final int version) { new RetryableZooKeeperAction<Object>() { @Override public Object action() { BetterZooKeeper.super.delete(path, version); return null; } }.run(); return; } private abstract class RetryableZooKeeperAction<T> { public abstract T action(); public final T run() { while (true) { try { return action(); } catch (KeeperException e) { // We will retry. } waitForReconnect(); } } } } The RetryableZooKeeperAction is parameterized with the return type of the function. The run() method holds the retry logic, and the action() method is a placeholder for whichever ZooKeeper method needs to be run. Each of the public methods of BetterZooKeeper instantiates an anonymous inner class that is a subclass of the RetryableZooKeeperAction inner class, and it overrides the action() method. The local variables are (strangely enough) implicitly passed to the action() method, which is possible because they are final. In the end, this approach does work and it does eliminate the duplication of the retry logic. However, it has two major drawbacks: (1) it creates a new object every time a method is called, and (2) it's ugly and hardly readable. 
I also had to work around the 'delete' method, which has a void return value. So, here is my question: is there a better way to do this in Java? This can't be a totally uncommon task, and other languages (like Python) make it easier by allowing methods to be passed around. I suspect there might be a way to do this through reflection, but I haven't been able to wrap my head around it.
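
On Java 8 and later the anonymous-class ceremony can be collapsed into lambdas by writing the retry logic once against java.util.concurrent.Callable (or any single-method interface). The sketch below applies that idea to the example above; withRetry is a made-up helper name, and it assumes the same BetterZooKeeper class body and waitForReconnect() as in the question.

```java
import java.util.concurrent.Callable;
import org.apache.zookeeper.KeeperException;

// Inside BetterZooKeeper:
private <T> T withRetry(Callable<T> action) {
    while (true) {
        try {
            return action.call();
        } catch (KeeperException e) {
            // fall through and retry after reconnecting
        } catch (Exception e) {
            throw new RuntimeException(e);   // non-retryable failure
        }
        waitForReconnect();
    }
}

@Override
public Stat exists(String path, Watcher watcher) {
    return withRetry(() -> super.exists(path, watcher));
}

@Override
public void delete(String path, int version) {
    withRetry(() -> { super.delete(path, version); return null; });   // void method wrapped as Callable
}
```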


  • Android FTP upload "has stopped" error

    - by Goxel Arp
    class Asenkron extends AsyncTask<String,Integer,Long> { @Override protected Long doInBackground(String... aurl) { FTPClient con=null; try ` { con = new FTPClient(); con.connect(aurl[0]); if (con.login(aurl[1], aurl[2])) { con.enterLocalPassiveMode(); // important! con.setFileType(http://FTP.BINARY_FILE_TYPE); FileInputStream in = new FileInputStream(new File(aurl[3])); boolean result = con.storeFile(aurl[3], in); in.close(); con.logout(); con.disconnect(); } } catch (Exception e) { Toast.makeText(getApplicationContext(), e.toString(), Toast.LENGTH_LONG).show(); } return null; } protected void onPostExecute(String result) {} } I AM USING THIS CLASS LIKE BELOW.THERE IS BUTTON AND WHENEVER I CLICK THE BUTTON IT SHOULD START FTP UPLOAD PROCESS IN BACKGROUND BUT I GET "PROGRAM HAS STOPPED UNFORTUNATELY" ERROR. Assume that The ftp address and username password pathfile sections are true and I get the internet and network permissions already by the way ... button1.setOnClickListener(new OnClickListener() { public void onClick(View arg0) { new Asenkron().execute("ftpaddress","username","pass","pathfileon telephone"); } }); And here is the logcat for you to analyse the potential error and help me ... 10-13 13:01:25.591: I/dalvikvm(633): threadid=3: reacting to signal 3 10-13 13:01:25.711: I/dalvikvm(633): Wrote stack traces to '/data/anr/traces.txt' 10-13 13:01:25.921: D/gralloc_goldfish(633): Emulator without GPU emulation detected. 10-13 13:01:31.441: W/dalvikvm(633): threadid=11: thread exiting with uncaught exception (group=0x409c01f8) 10-13 13:01:31.461: E/AndroidRuntime(633): FATAL EXCEPTION: AsyncTask #1 10-13 13:01:31.461: E/AndroidRuntime(633): java.lang.RuntimeException: An error occured while executing doInBackground() 10-13 13:01:31.461: E/AndroidRuntime(633): at android.os.AsyncTask$3.done(AsyncTask.java:278) 10-13 13:01:31.461: E/AndroidRuntime(633): at java.util.concurrent.FutureTask$Sync.innerSetException(FutureTask.java:273) 10-13 13:01:31.461: E/AndroidRuntime(633): at java.util.concurrent.FutureTask.setException(FutureTask.java:124) 10-13 13:01:31.461: E/AndroidRuntime(633): at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:307) 10-13 13:01:31.461: E/AndroidRuntime(633): at java.util.concurrent.FutureTask.run(FutureTask.java:137) 10-13 13:01:31.461: E/AndroidRuntime(633): at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:208) 10-13 13:01:31.461: E/AndroidRuntime(633): at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1076) 10-13 13:01:31.461: E/AndroidRuntime(633): at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:569) 10-13 13:01:31.461: E/AndroidRuntime(633): at java.lang.Thread.run(Thread.java:856) 10-13 13:01:31.461: E/AndroidRuntime(633): Caused by: java.lang.RuntimeException: Can't create handler inside thread that has not called Looper.prepare() 10-13 13:01:31.461: E/AndroidRuntime(633): at android.os.Handler.<init>(Handler.java:121) 10-13 13:01:31.461: E/AndroidRuntime(633): at android.widget.Toast$TN.<init>(Toast.java:317) 10-13 13:01:31.461: E/AndroidRuntime(633): at android.widget.Toast.<init>(Toast.java:91) 10-13 13:01:31.461: E/AndroidRuntime(633): at android.widget.Toast.makeText(Toast.java:233) 10-13 13:01:31.461: E/AndroidRuntime(633): at com.example.ftpodak.ODAK$Asenkron.doInBackground(ODAK.java:74) 10-13 13:01:31.461: E/AndroidRuntime(633): at com.example.ftpodak.ODAK$Asenkron.doInBackground(ODAK.java:1) 10-13 13:01:31.461: E/AndroidRuntime(633): at 
android.os.AsyncTask$2.call(AsyncTask.java:264) 10-13 13:01:31.461: E/AndroidRuntime(633): at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:305) 10-13 13:01:31.461: E/AndroidRuntime(633): ... 5 more By the way, I changed the relevant code: instead of catch (Exception e) { Toast.makeText(getApplicationContext(), e.toString(), Toast.LENGTH_LONG).show(); } I now use catch (Exception e) { HATA=e.toString(); } and I added textview1.setText(HATA); to the button handler so I can see the error in the TextView. It shows "Android java.net.UnknownHostException: Host is unresolved". But I know the FTP address is correct - I checked the server from the AndFTP application with the same address, login and password, and it works there. So the problem is in my code, I think. Any help would be much appreciated; I can provide TeamViewer access to anyone willing to help analyse the problem.
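
For reference, the root failure in that logcat is the Toast being created on the AsyncTask worker thread ("Can't create handler inside thread that has not called Looper.prepare()"). A hedged sketch of the usual fix is below: remember the error in doInBackground and touch the UI only in onPostExecute, which runs on the main thread. It assumes the same imports and enclosing Activity as the original; note also that the onPostExecute parameter must be Long (the third AsyncTask type parameter), not String, or the method silently fails to override the callback.

```java
class Asenkron extends AsyncTask<String, Integer, Long> {

    private String lastError;   // written on the worker thread, read on the UI thread

    @Override
    protected Long doInBackground(String... aurl) {
        FTPClient con = new FTPClient();
        try {
            con.connect(aurl[0]);
            if (con.login(aurl[1], aurl[2])) {
                con.enterLocalPassiveMode();
                con.setFileType(FTP.BINARY_FILE_TYPE);
                FileInputStream in = new FileInputStream(new File(aurl[3]));
                con.storeFile(aurl[3], in);
                in.close();
                con.logout();
                con.disconnect();
            }
        } catch (Exception e) {
            lastError = e.toString();   // no Toast here -- wrong thread
        }
        return null;
    }

    @Override
    protected void onPostExecute(Long result) {   // parameter type matches AsyncTask<..., Long>
        if (lastError != null) {
            Toast.makeText(getApplicationContext(), lastError, Toast.LENGTH_LONG).show();
        }
    }
}
```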


  • Arduino Ethernet Shield Not Connecting to WebServer

    - by new user
    I have a problem making my Arduino Ethernet shield to communicate with the server, the result on the serial monitor is always: my arduino code is #include <Ethernet.h> //library for ethernet functions #include <SPI.h> #include <Dns.h> #include <Client.h> //library for client functions #include <DallasTemperature.h> //library for temperature sensors // Ethernet settings byte mac[] = {0x09,0xA2,0xDA,0x00,0x01,0x26}; //Replace with your Ethernet shield MAC byte ip[] = { 192,168,0,54}; //The Arduino device IP address byte subnet[] = { 255,255,255,0}; byte gateway[] = { 192,168,0,1}; IPAddress server(192,168,0,53); // IP-adress of server arduino sends data to EthernetClient client; bool connected = false; void setup(void) { Serial.begin(9600); Serial.println("Initializing Ethernet."); delay(1000); Ethernet.begin(mac, ip , gateway , subnet); } void loop(void) { if(!connected) { Serial.println("Not connected"); if (client.connect(server, 80)) { connected = true; int temp =analogRead(A1); Serial.print("Temp is "); Serial.println(temp); Serial.println(); Serial.println("Sending to Server: "); client.print("GET /formSubmit.php?t0="); Serial.print("GET /formSubmit.php?t0="); client.print(temp); Serial.print(temp); client.println(" HTTP/1.1"); Serial.println(" HTTP/1.1"); client.println("Host: http://localhost/PhpProject1/"); Serial.println("Host: http://localhost/PhpProject1/"); client.println("User-Agent: Arduino"); Serial.println("User-Agent: Arduino"); client.println("Accept: text/html"); Serial.println("Accept: text/html"); //client.println("Connection: close"); //Serial.println("Connection: close"); client.println(); Serial.println(); delay(10000); } else{ Serial.println("Cannot connect to Server"); } } else { delay(1000); while (client.connected() && client.available()) { char c = client.read(); Serial.print(c); } Serial.println(); client.stop(); connected = false; } } the server is an Apache server running on a pc, the server ip address in the code is the pc ip address. For testing purposes I work at my homes network, there's no proxy or firewall, and I also turned of the antivirus and firewall on my pc. the result in the serial monitor is always: Not connected Cannot connect to Server Any thoughts??
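
Separate from the connection failure itself, two details in the request are worth checking: with HTTP/1.1 the Host header must be a plain host name or IP (not a URL), and the path in the GET line has to match where formSubmit.php actually sits under Apache's document root. A hedged sketch of the request portion follows; the /PhpProject1/ path is a guess taken from the Host line above and may need adjusting.

```cpp
// Sketch of the GET request once client.connect(server, 80) has succeeded.
client.print("GET /PhpProject1/formSubmit.php?t0=");   // path relative to Apache's document root
client.print(temp);
client.println(" HTTP/1.1");
client.println("Host: 192.168.0.53");                   // plain host or IP, never a URL
client.println("User-Agent: Arduino");
client.println("Connection: close");                    // ask the server to close when done
client.println();                                       // blank line terminates the headers
```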


  • [C++] Adding a string or char array to a byte vector

    - by xeross
    I'm currently working on a class to create and read out packets send through the network, so far I have it working with 16bit and 8bit integers (Well unsigned but still). Now the problem is I've tried numerous ways of copying it over but somehow the _buffer got mangled, it segfaulted, or the result was wrong. I'd appreciate if someone could show me a working example. My current code can be seen below. Thanks, Xeross Main #include <iostream> #include <stdio.h> #include "Packet.h" using namespace std; int main(int argc, char** argv) { cout << "#################################" << endl; cout << "# Internal Use Only #" << endl; cout << "# Codename PACKETSTORM #" << endl; cout << "#################################" << endl; cout << endl; Packet packet = Packet(); packet.SetOpcode(0x1f4d); cout << "Current opcode is: " << packet.GetOpcode() << endl << endl; packet.add(uint8_t(5)) .add(uint16_t(4000)) .add(uint8_t(5)); for(uint8_t i=0; i<10;i++) printf("Byte %u = %x\n", i, packet._buffer[i]); printf("\nReading them out: \n1 = %u\n2 = %u\n3 = %u\n4 = %s", packet.readUint8(), packet.readUint16(), packet.readUint8()); return 0; } Packet.h #ifndef _PACKET_H_ #define _PACKET_H_ #include <iostream> #include <vector> #include <stdio.h> #include <stdint.h> #include <string.h> using namespace std; class Packet { public: Packet() : m_opcode(0), _buffer(0), _wpos(0), _rpos(0) {} Packet(uint16_t opcode) : m_opcode(opcode), _buffer(0), _wpos(0), _rpos(0) {} uint16_t GetOpcode() { return m_opcode; } void SetOpcode(uint16_t opcode) { m_opcode = opcode; } Packet& add(uint8_t value) { if(_buffer.size() < _wpos + 1) _buffer.resize(_wpos + 1); memcpy(&_buffer[_wpos], &value, 1); _wpos += 1; return *this; } Packet& add(uint16_t value) { if(_buffer.size() < _wpos + 2) _buffer.resize(_wpos + 2); memcpy(&_buffer[_wpos], &value, 2); _wpos += 2; return *this; } uint8_t readUint8() { uint8_t result = _buffer[_rpos]; _rpos += sizeof(uint8_t); return result; } uint16_t readUint16() { uint16_t result; memcpy(&result, &_buffer[_rpos], sizeof(uint16_t)); _rpos += sizeof(uint16_t); return result; } uint16_t m_opcode; std::vector<uint8_t> _buffer; protected: size_t _wpos; // Write position size_t _rpos; // Read position }; #endif // _PACKET_H_
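
For the string/char-array case the question asks about, one approach that follows the same pattern as the existing add() overloads is sketched below. Whether the wire format should length-prefix or NUL-terminate the string is a protocol decision left open here; this version appends the raw bytes only.

```cpp
// Append the raw bytes of a string to the packet buffer (no terminator added).
Packet& add(const std::string& value)
{
    if (value.empty())
        return *this;
    if (_buffer.size() < _wpos + value.size())
        _buffer.resize(_wpos + value.size());
    memcpy(&_buffer[_wpos], value.data(), value.size());
    _wpos += value.size();
    return *this;
}

// Convenience overload for C string literals and char arrays.
Packet& add(const char* value)
{
    return add(std::string(value));
}
```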


  • Why doesn't PHP's oci_connect return false?

    - by absolethe
    I have a situation in which we have two production databases that synchronize with one other. Server One is considered the primary. Sometimes due to maintenance or a disaster Server Two will become primary. In some of our code that means we have to manually go in and edit the server name for database connections. I find this annoying, so the last thing I wrote I put the server information for both and set up a loop. If oci_connect failed on the Server One 3 times it would move on to Server Two. If Server Two failed 3 times it would notify the user a connection couldn't be made. This has worked fine most times we've had the situation of switching the servers. Yesterday, for example, it worked fine. Today it didn't. It just sat and spun endlessly. No error in the PHP error log. No failure to move on from. No error output to the screen. Nothing for 5 minutes. So then I had to manually edit the stupid config file. I asked what could possibly be different and I was told "yesterday the database was down, but not the server. today the server is down." Okay...? But I don't see a distinction. I would expect oci_connect to return false if it can't establish any sort of communication with the server. I'd expect it to timeout and error. Not just pass it on when it receives an error code from the server. What if there's a network problem, for example? Is this a bug in oci_connect or is there a possibility that something in our PHP configuration gives oci_connect a crazily long timeout? If it is a sort of "bug" is there some way I can check to see if the server is up first? Like a ping? (Of course when I did a ping through the command prompt I got a response from Server One and then was told, "it's back now" although I am skeptical about the timing on that.) Anyway, if anyone could shed some light on why oci_connect might run endlessly without failing and how to keep it from doing so I'd be grateful. -- Edit: My code looks like the examples on PHP.net only in some loops. $count = count($servers); for($i = 0; $i < $count; $i++){ if((!isset($connection)) || ($connection == false)){ // Attempt to connect to the oracle database $connection = @oci_connect($servers[$i]["user"], $servers[$i]["pass"], $servers[$i]["conid"]) or ($conn_error = oracle_error()); // Try again if there was a failure if(($connection == false) || (isset($con_error))){ // Three (two more) tries per alternative for($j = $st; $j < $fn; $j++){ // Try again to connect $connection = @oci_connect($servers[$i]["user"], $servers[$i]["pass"], $servers[$i]["conid"]) or ($conn_error = oracle_error()); } // for($j = 2; $j < 4; $j++) } // if($connection == false) } // if(!isset($connection) || ($connection == false)) } // for($i = 0; $i < $count; $i++)
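
oci_connect() has no timeout parameter of its own (connect timeouts are governed by the Oracle client configuration), so when the whole machine is unreachable the underlying TCP connect can block until the operating system gives up. One pragmatic workaround, sketched under the assumption that the listener runs on the default port 1521 and that each server entry can expose its host name, is to probe the port with fsockopen and its explicit timeout before attempting oci_connect:

```php
<?php
// Probe the Oracle listener before calling oci_connect(), so a dead host fails
// fast instead of hanging for the OS-level TCP timeout.
function server_reachable($host, $port = 1521, $timeoutSeconds = 5)
{
    $errno = 0;
    $errstr = '';
    $socket = @fsockopen($host, $port, $errno, $errstr, $timeoutSeconds);
    if ($socket === false) {
        return false;   // host or listener down -- move on to the next server
    }
    fclose($socket);
    return true;
}

// Hypothetical usage inside the failover loop, assuming a "host" key exists:
// if (!server_reachable($servers[$i]["host"])) { continue; }
```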


  • Pass an ArrayList of beans from Android to a PHP web service

    - by user1547432
    i'm newbie in android with web service i'm trying to pass arraylist from android to webservice php server here's my bean code: public class ExpressionBean { public static final String EXPRESSION_ID = "expressionID"; public static final String EXPRESSION_TEXT = "expressionText"; public static final String ANS_TEXT1 = "ansText1"; public static final String ANS_TEXT2 = "ansText2"; public static final String ASSESSEE_ANSWER = "assesseeAnswer"; private String expressionID; private String expressionText; private String ansText1; private String ansText2; private String assesseeAnswer; public String getExpressionID() { return expressionID; } public void setExpressionID(String expressionID) { this.expressionID = expressionID; } public String getExpressionText() { return expressionText; } public void setExpressionText(String expressionText) { this.expressionText = expressionText; } public String getAnsText1() { return ansText1; } public void setAnsText1(String ansText1) { this.ansText1 = ansText1; } public String getAnsText2() { return ansText2; } public void setAnsText2(String ansText2) { this.ansText2 = ansText2; } public String getAssesseeAnswer() { return assesseeAnswer; } public void setAssesseeAnswer(String assesseeAnswer) { this.assesseeAnswer = assesseeAnswer; } } and here's my doInBackround on async task : protected Boolean doInBackground(Void... params) { // TODO: attempt authentication against a network service. boolean result = false; // test = new TestBean(); // int resultTest = 0; UserFunctions userFunction = new UserFunctions(); Log.d(TAG, "UID : " + mEmail); // Log.d(TAG, "resultTest : " + resultTest); JSONObject jsonTest = userFunction.storeTest(mEmail); Log.d(TAG, "After JSON TEST "); try { if (jsonTest.getString(KEY_SUCCESS) != null) { String res = jsonTest.getString(KEY_SUCCESS); JSONObject testData = jsonTest.getJSONObject(TAG_TEST); test = new TestBean(); test.setTestid(testData.getInt(TAG_TEST_ID)); test.setUid(testData.getInt(TAG_UID)); JSONArray list = new JSONArray(); String list2; for (int position = 0; position < expressionList.size(); position++) { Gson gson = new Gson(); list.put(gson.toJson(expressionList.get(position))); } Log.d(TAG, "JSONArray list coy : " + list); UserFunctions uf = new UserFunctions(); JSONObject jsonHistoryList = new JSONObject(); jsonHistoryList = uf.storeHistoryList(list.toString()); if (Integer.parseInt(res) == 1) { result = true; finish(); } else { result = false; } } } catch (JSONException e) { e.printStackTrace(); return false; } // TODO: register the new account here. return result; } and here's storeHistoryList Method : public JSONObject storeHistoryList(String list) { // Building Parameters List<NameValuePair> params = new ArrayList<NameValuePair>(); params.add(new BasicNameValuePair("tag", storeHistory_tag)); params.add(new BasicNameValuePair("list", list)); JSONObject json = jsonParser.getJSONFromUrl(URL, params); return json; } i want to pass list to web service list is an arraylist ExpressionBean i used gson for convert bean to json but when i execute, the log said "error parsing data... jsonarray cannot be converted to jsonobject what i must to do? thanks
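
"JSONArray cannot be converted to JSONObject" usually means the two ends disagree about the shape of the payload. One hedged simplification worth trying on the Android side: let Gson serialize the whole list in a single call, so the parameter really is a JSON array of objects rather than a JSONArray of already-encoded strings (which arrive double-escaped). Names mirror the question; nothing here is verified against the actual web service.

```java
// Serialize the whole list at once instead of putting gson.toJson(element)
// strings into a JSONArray -- that produces escaped strings, not objects.
Gson gson = new Gson();
String list = gson.toJson(expressionList);   // e.g. [{"expressionID":"1", ...}, ...]

UserFunctions uf = new UserFunctions();
JSONObject jsonHistoryList = uf.storeHistoryList(list);

// On the PHP side the matching decode would be:
//   $rows = json_decode($_POST['list'], true);   // associative array of rows
```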


  • Loading jQuery Consistently in a .NET Web App

    - by Rick Strahl
    One thing that frequently comes up in discussions when using jQuery is how to best load the jQuery library (as well as other commonly used and updated libraries) in a Web application. Specifically the issue is the one of versioning and making sure that you can easily update and switch versions of script files with application wide settings in one place and having your script usage reflect those settings in the entire application on all pages that use the script. Although I use jQuery as an example here, the same concepts can be applied to any script library - for example in my Web libraries I use the same approach for jQuery.ui and my own internal jQuery support library. The concepts used here can be applied both in WebForms and MVC. Loading jQuery Properly From CDN Before we look at a generic way to load jQuery via some server logic, let me first point out my preferred way to embed jQuery into the page. I use the Google CDN to load jQuery and then use a fallback URL to handle the offline or no Internet connection scenario. Why use a CDN? CDN links tend to be loaded more quickly since they are very likely to be cached in user's browsers already as jQuery CDN is used by many, many sites on the Web. Using a CDN also removes load from your Web server and puts the load bearing on the CDN provider - in this case Google - rather than on your Web site. On the downside, CDN links gives the provider (Google, Microsoft) yet another way to track users through their Web usage. Here's how I use jQuery CDN plus a fallback link on my WebLog for example: <!DOCTYPE HTML> <html> <head> <script src="//ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.min.js"></script> <script> if (typeof (jQuery) == 'undefined') document.write(unescape("%3Cscript " + "src='/Weblog/wwSC.axd?r=Westwind.Web.Controls.Resources.jquery.js' %3E%3C/script%3E")); </script> <title>Rick Strahl's Web Log</title> ... </head>   You can see that the CDN is referenced first, followed by a small script block that checks to see whether jQuery was loaded (jQuery object exists). If it didn't load another script reference is added to the document dynamically pointing to a backup URL. In this case my backup URL points at a WebResource in my Westwind.Web  assembly, but the URL can also be local script like src="/scripts/jquery.min.js". Important: Use the proper Protocol/Scheme for  for CDN Urls [updated based on comments] If you're using a CDN to load an external script resource you should always make sure that the script is loaded with the same protocol as the parent page to avoid mixed content warnings by the browser. You don't want to load a script link to an http:// resource when you're on an https:// page. The easiest way to use this is by using a protocol relative URL: <script src="//ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.min.js"></script> which is an easy way to load resources from other domains. This URL syntax will automatically use the parent page's protocol (or more correctly scheme). As long as the remote domains support both http:// and https:// access this should work. BTW this also works in CSS (with some limitations) and links. BTW, I didn't know about this until it was pointed out in the comments. This is a very useful feature for many things - ah the benefits of my blog to myself :-) Version Numbers When you use a CDN you notice that you have to reference a specific version of jQuery. 
When using local files you may not have to do this as you can rename your private copy of jQuery.js, but for CDN the references are always versioned. The version number is of course very important to ensure you getting the version you have tested with, but it's also important to the provider because it ensures that cached content is always correct. If an existing file was updated the updates might take a very long time to get past the locally cached content and won't refresh properly. The version number ensures you get the right version and not some cached content that has been changed but not updated in your cache. On the other hand version numbers also mean that once you decide to use a new version of the script you now have to change all your script references in your pages. Depending on whether you use some sort of master/layout page or not this may or may not be easy in your application. Even if you do use master/layout pages, chances are that you probably have a few of them and at the very least all of those have to be updated for the scripts. If you use individual pages for all content this issue then spreads to all of your pages. Search and Replace in Files will do the trick, but it's still something that's easy to forget and worry about. Personaly I think it makes sense to have a single place where you can specify common script libraries that you want to load and more importantly which versions thereof and where they are loaded from. Loading Scripts via Server Code Script loading has always been important to me and as long as I can remember I've always built some custom script loading routines into my Web frameworks. WebForms makes this fairly easy because it has a reasonably useful script manager (ClientScriptManager and the ScriptManager) which allow injecting script into the page easily from anywhere in the Page cycle. What's nice about these components is that they allow scripts to be injected by controls so components can wrap up complex script/resource dependencies more easily without having to require long lists of CSS/Scripts/Image includes. In MVC or pure script driven applications like Razor WebPages  the process is more raw, requiring you to embed script references in the right place. But its also more immediate - it lets you know exactly which versions of scripts to use because you have to manually embed them. In WebForms with different controls loading resources this often can get confusing because it's quite possible to load multiple versions of the same script library into a page, the results of which are less than optimal… In this post I look a simple routine that embeds jQuery into the page based on a few application wide configuration settings. It returns only a string of the script tags that can be manually embedded into a Page template. It's a small function that merely a string of the script tags shown at the begging of this post along with some options on how that string is comprised. You'll be able to specify in one place which version loads and then all places where the help function is used will automatically reflect this selection. Options allow specification of the jQuery CDN Url, the fallback Url and where jQuery should be loaded from (script folder, Resource or CDN in my case). While this is specific to jQuery you can apply this to other resources as well. For example I use a similar approach with jQuery.ui as well using practically the same semantics. 
Providing Resources in ControlResources In my Westwind.Web Web utility library I have a class called ControlResources which is responsible for holding resource Urls, resource IDs and string contants that reference those resource IDs. The library also provides a few helper methods for loading common scriptscripts into a Web page. There are specific versions for WebForms which use the ClientScriptManager/ScriptManager and script link methods that can be used in any .NET technology that can embed an expression into the output template (or code for that matter). The ControlResources class contains mostly static content - references to resources mostly. But it also contains a few static properties that configure script loading: A Script LoadMode (CDN, Resource, or script url) A default CDN Url A fallback url They are  static properties in the ControlResources class: public class ControlResources { /// <summary> /// Determines what location jQuery is loaded from /// </summary> public static JQueryLoadModes jQueryLoadMode = JQueryLoadModes.ContentDeliveryNetwork; /// <summary> /// jQuery CDN Url on Google /// </summary> public static string jQueryCdnUrl = "//ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.min.js"; /// <summary> /// jQuery CDN Url on Google /// </summary> public static string jQueryUiCdnUrl = "//ajax.googleapis.com/ajax/libs/jqueryui/1.8.16/jquery-ui.min.js"; /// <summary> /// jQuery UI fallback Url if CDN is unavailable or WebResource is used /// Note: The file needs to exist and hold the minimized version of jQuery ui /// </summary> public static string jQueryUiLocalFallbackUrl = "~/scripts/jquery-ui.min.js"; } These static properties are fixed values that can be changed at application startup to reflect your preferences. Since they're static they are application wide settings and respected across the entire Web application running. It's best to set these default in Application_Init or similar startup code if you need to change them for your application: protected void Application_Start(object sender, EventArgs e) { // Force jQuery to be loaded off Google Content Network ControlResources.jQueryLoadMode = JQueryLoadModes.ContentDeliveryNetwork; // Allow overriding of the Cdn url ControlResources.jQueryCdnUrl = "http://ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js"; // Route to our own internal handler App.OnApplicationStart(); } With these basic settings in place you can then embed expressions into a page easily. In WebForms use: <!DOCTYPE html> <html> <head runat="server"> <%= ControlResources.jQueryLink() %> <script src="scripts/ww.jquery.min.js"></script> </head> In Razor use: <!DOCTYPE html> <html> <head> @Html.Raw(ControlResources.jQueryLink()) <script src="scripts/ww.jquery.min.js"></script> </head> Note that in Razor you need to use @Html.Raw() to force the string NOT to escape. Razor by default escapes string results and this ensures that the HTML content is properly expanded as raw HTML text. 
Both the WebForms and Razor output produce: <!DOCTYPE html> <html> <head> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js" type="text/javascript"></script> <script type="text/javascript"> if (typeof (jQuery) == 'undefined') document.write(unescape("%3Cscript src='/WestWindWebToolkitWeb/WebResource.axd?d=-b6oWzgbpGb8uTaHDrCMv59VSmGhilZP5_T_B8anpGx7X-PmW_1eu1KoHDvox-XHqA1EEb-Tl2YAP3bBeebGN65tv-7-yAimtG4ZnoWH633pExpJor8Qp1aKbk-KQWSoNfRC7rQJHXVP4tC0reYzVw2&t=634535391996872492' type='text/javascript'%3E%3C/script%3E"));</script> <script src="scripts/ww.jquery.min.js"></script> </head> which produces the desired effect for both CDN load and fallback URL. The implementation of jQueryLink is pretty basic of course: /// <summary> /// Inserts a script link to load jQuery into the page based on the jQueryLoadModes settings /// of this class. Default load is by CDN plus WebResource fallback /// </summary> /// <param name="url"> /// An optional explicit URL to load jQuery from. Url is resolved. /// When specified no fallback is applied /// </param> /// <returns>full script tag and fallback script for jQuery to load</returns> public static string jQueryLink(JQueryLoadModes jQueryLoadMode = JQueryLoadModes.Default, string url = null) { string jQueryUrl = string.Empty; string fallbackScript = string.Empty; if (jQueryLoadMode == JQueryLoadModes.Default) jQueryLoadMode = ControlResources.jQueryLoadMode; if (!string.IsNullOrEmpty(url)) jQueryUrl = WebUtils.ResolveUrl(url); else if (jQueryLoadMode == JQueryLoadModes.WebResource) { Page page = new Page(); jQueryUrl = page.ClientScript.GetWebResourceUrl(typeof(ControlResources), ControlResources.JQUERY_SCRIPT_RESOURCE); } else if (jQueryLoadMode == JQueryLoadModes.ContentDeliveryNetwork) { jQueryUrl = ControlResources.jQueryCdnUrl; if (!string.IsNullOrEmpty(jQueryCdnUrl)) { // check if jquery loaded - if it didn't we're not online and use WebResource fallbackScript = @"<script type=""text/javascript"">if (typeof(jQuery) == 'undefined') document.write(unescape(""%3Cscript src='{0}' type='text/javascript'%3E%3C/script%3E""));</script>"; fallbackScript = string.Format(fallbackScript, WebUtils.ResolveUrl(ControlResources.jQueryCdnFallbackUrl)); } } string output = "<script src=\"" + jQueryUrl + "\" type=\"text/javascript\"></script>"; // add in the CDN fallback script code if (!string.IsNullOrEmpty(fallbackScript)) output += "\r\n" + fallbackScript + "\r\n"; return output; } There's one dependency here on WebUtils.ResolveUrl() which resolves Urls without access to a Page/Control (another one of those features that should be in the runtime, not in the WebForms or MVC engine). You can see there's only a little bit of logic in this code that deals with potentially different load modes. I can load scripts from a Url, WebResources or - my preferred way - from CDN. Based on the static settings the scripts to embed are composed to be returned as simple string <script> tag(s). I find this extremely useful especially when I'm not connected to the internet so that I can quickly swap in a local jQuery resource instead of loading from CDN. While CDN loading with the fallback works it can be a bit slow as the CDN is probed first before the fallback kicks in. Switching quickly in one place makes this trivial. It also makes it very easy once a new version of jQuery rolls around to move up to the new version and ensure that all pages are using the new version immediately. 
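As an aside, since the code above leans on WebUtils.ResolveUrl(): the actual Westwind implementation isn't shown here, but a minimal sketch of such a page-less URL resolver - relying only on System.Web's VirtualPathUtility - might look roughly like this (the class and method names below are illustrative, not the library's):

using System.Web;

public static class UrlUtils
{
    // Resolves "~/" application-relative urls to root-relative urls
    // without needing a Page or Control instance.
    public static string ResolveUrl(string url)
    {
        if (string.IsNullOrEmpty(url))
            return url;

        // Only "~" prefixed urls need resolving; everything else passes through
        if (!url.StartsWith("~"))
            return url;

        // VirtualPathUtility knows the application root ("/" or "/MyApp/")
        return VirtualPathUtility.ToAbsolute(url);
    }
}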
I'm not trying to make this out as 'the' definitive way to load your resources, but rather provide it here as a pointer so you can maybe apply your own logic to determine where scripts come from and how they load. You could even automate this some more by using configuration settings or reading the locations/preferences out of some sort of data/metadata store that can be dynamically updated instead of requiring recompilation. FWIW, I use a very similar approach for loading jQuery UI and my own ww.jquery library - the same concept can be applied to any kind of script you might be loading from different locations. Hopefully some of you find this a useful addition to your toolset. Resources Google CDN for jQuery Full ControlResources Source Code ControlResource Documentation Westwind.Web NuGet This method is part of the Westwind.Web library of the West Wind Web Toolkit or you can grab the Web library from NuGet and add it to your Visual Studio project. This package includes a host of Web related utilities and script support features. © Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, jQuery

    Read the article

  • How to reduce iOS AVPlayer start delay

    - by Bernt Habermeier
Note, for the below question: All assets are local on the device -- no network streaming is taking place. The videos contain audio tracks. I'm working on an iOS application that requires playing video files with minimum delay to start the video clip in question. Unfortunately we do not know what specific video clip is next until we actually need to start it up. Specifically: When one video clip is playing, we will know what the next set of (roughly) 10 video clips are, but we don't know which one exactly, until it comes time to 'immediately' play the next clip. What I've done to look at actual start delays is to call addBoundaryTimeObserverForTimes on the video player, with a time period of one millisecond to see when the video actually started to play, and I take the difference of that time stamp with the first place in the code that indicates which asset to start playing. From what I've seen thus-far, I have found that using the combination of AVAsset loading, and then creating an AVPlayerItem from that once it's ready, and then waiting for AVPlayerStatusReadyToPlay before I call play, tends to take between 1 and 3 seconds to start the clip. I've since switched to what I think is roughly equivalent: calling [AVPlayerItem playerItemWithURL:] and waiting for AVPlayerItemStatusReadyToPlay to play. Roughly same performance. One thing I'm observing is that the first AVPlayer item load is slower than the rest. It seems that pre-flighting the AVPlayer with a short / empty asset before trying to play the first video might be good general practice. [http://stackoverflow.com/questions/900461/slow-start-for-avaudioplayer-the-first-time-a-sound-is-played] I'd love to get the video start times down as much as possible, and have some ideas of things to experiment with, but would like some guidance from anyone that might be able to help. Update: idea 7, below, as-implemented yields switching times of around 500 ms. This is an improvement, but it'd be nice to get this even faster. Idea 1: Use N AVPlayers (won't work) Using ~ 10 AVPlayer objects and start-and-pause all ~ 10 clips, and once we know which one we really need, switch to, and un-pause the correct AVPlayer, and start all over again for the next cycle. I don't think this works, because I've read there is roughly a limit of 4 active AVPlayers in iOS. There was someone asking about this on StackOverflow here, and found out about the 4 AVPlayer limit: fast-switching-between-videos-using-avfoundation Idea 2: Use AVQueuePlayer (won't work) I don't believe that shoving 10 AVPlayerItems into an AVQueuePlayer would pre-load them all for seamless start. AVQueuePlayer is a queue, and I think it really only makes the next video in the queue ready for immediate playback. I don't know which one out of ~10 videos we do want to play back, until it's time to start that one. ios-avplayer-video-preloading Idea 3: Load, Play, and retain AVPlayerItems in background (not 100% sure yet -- but not looking good) I'm looking at if there is any benefit to load and play the first second of each video clip in the background (suppress video and audio output), and keep a reference to each AVPlayerItem, and when we know which item needs to be played for real, swap that one in, and swap the background AVPlayer with the active one. Rinse and Repeat. The theory would be that recently played AVPlayer/AVPlayerItems may still hold some prepared resources which would make subsequent playback faster. 
So far, I have not seen benefits from this, but I might not have the AVPlayerLayer setup correctly for the background. I doubt this will really improve things from what I've seen. Idea 4: Use a different file format -- maybe one that is faster to load? I'm currently using .m4v's (video-MPEG4) H.264 format. I have not played around with other formats, but it may well be that some formats are faster to decode / get ready than others. Possibly still using video-MPEG4 but with a different codec, or maybe quicktime? Maybe a lossless video format where decoding / setup is faster? Idea 5: Combination of lossless video format + AVQueuePlayer If there is a video format that is fast to load, but maybe where the file size is insane, one idea might be to pre-prepare the first 10 seconds of each video clip with a version that is bloated but faster to load, but back that up with an asset that is encoded in H.264. Use an AVQueuePlayer, and add the first 10 seconds in the uncompressed file format, and follow that up with one that is in H.264 which gets up to 10 seconds of prepare/preload time. So I'd get 'the best' of both worlds: fast start times, but also benefits from a more compact format. Idea 6: Use a non-standard AVPlayer / write my own / use someone else's Given my needs, maybe I can't use AVPlayer, but have to resort to AVAssetReader, and decode the first few seconds (possibly write raw file to disk), and when it comes to playback, make use of the raw format to play it back fast. Seems like a huge project to me, and if I go about it in a naive way, it's unclear / unlikely to even work better. Each decoded and uncompressed video frame is 2.25 MB. Naively speaking -- if we go with ~ 30 fps for the video, I'd end up with a ~60 MB/s read-from-disk requirement, which is probably impossible / pushing it. Obviously we'd have to do some level of image compression (perhaps native openGL/es compression formats via PVRTC)... but that's kind of crazy. Maybe there is a library out there that I can use? Idea 7: Combine everything into a single movie asset, and seekToTime One idea that might be easier than some of the above, is to combine everything into a single movie, and use seekToTime. The thing is that we'd be jumping all around the place. Essentially random access into the movie. I think this may actually work out okay: avplayer-movie-playing-lag-in-ios5 Which approach do you think would be best? So far, I've not made that much progress in terms of reducing the lag.

    Read the article

  • HTG Reviews the CODE Keyboard: Old School Construction Meets Modern Amenities

    - by Jason Fitzpatrick
There’s nothing quite as satisfying as the smooth and crisp action of a well built keyboard. If you’re tired of mushy keys and cheap feeling keyboards, a well-constructed mechanical keyboard is a welcome respite from the $10 keyboard that came with your computer. Read on as we put the CODE mechanical keyboard through the paces. What is the CODE Keyboard? The CODE keyboard is a collaboration between manufacturer WASD Keyboards and Jeff Atwood of Coding Horror (the guy behind the Stack Exchange network and Discourse forum software). Atwood’s focus was incorporating the best of traditional mechanical keyboards and the best of modern keyboard usability improvements. In his own words: The world is awash in terrible, crappy, no name how-cheap-can-we-make-it keyboards. There are a few dozen better mechanical keyboard options out there. I’ve owned and used at least six different expensive mechanical keyboards, but I wasn’t satisfied with any of them, either: they didn’t have backlighting, were ugly, had terrible design, or were missing basic functions like media keys. That’s why I originally contacted Weyman Kwong of WASD Keyboards way back in early 2012. I told him that the state of keyboards was unacceptable to me as a geek, and I proposed a partnership wherein I was willing to work with him to do whatever it takes to produce a truly great mechanical keyboard. Even the ardent skeptic who questions whether Atwood has indeed created a truly great mechanical keyboard certainly can’t argue with the position he starts from: there are so many agonizingly crappy keyboards out there. Even worse, in our opinion, is that unless you’re a typist of a certain vintage there’s a good chance you’ve never actually typed on a really nice keyboard. Those that didn’t start using computers until the mid-to-late 1990s most likely have always typed on modern mushy-key keyboards and never known the joy of typing on a really responsive and crisp mechanical keyboard. Is our preference for and love of mechanical keyboards shining through here? Good. We’re not even going to try and hide it. So where does the CODE keyboard stack up in the pantheon of keyboards? Read on as we walk you through the simple setup and our experience using the CODE. Setting Up the CODE Keyboard Although the setup of the CODE keyboard is essentially plug and play, there are two distinct setup steps that you likely haven’t had to perform on a previous keyboard. Both highlight the degree of care put into the keyboard and the amount of customization available. Inside the box you’ll find the keyboard, a micro USB cable, a USB-to-PS2 adapter, and a tool which you may be unfamiliar with: a key puller. We’ll return to the key puller in a moment. Unlike the majority of keyboards on the market, the cord isn’t permanently affixed to the keyboard. What does this mean for you? Aside from the obvious need to plug it in yourself, it makes it dead simple to repair your own keyboard cord if it gets attacked by a pet, mangled in a mechanism on your desk, or otherwise damaged. It also makes it easy to take advantage of the cable routing channels on the underside of the keyboard to route your cable exactly where you want it. While we’re staring at the underside of the keyboard, check out those beefy rubber feet. By peripherals standards they’re huge (and there are six instead of the usual four). Once you plunk the keyboard down where you want it, it might as well be glued down, the rubber feet work so well. 
After you’ve secured the cable and adjusted it to your liking, there is one more task before you plug the keyboard into the computer. On the bottom left-hand side of the keyboard, you’ll find a small recess in the plastic with some dip switches inside: The dip switches are there to switch hardware functions for various operating systems, keyboard layouts, and to enable/disable function keys. By toggling the dip switches you can change the keyboard from QWERTY mode to Dvorak mode and Colemak mode, the two most popular alternative keyboard configurations. You can also use the switches to enable Mac-functionality (for Command/Option keys). One of our favorite little toggles is the SW3 dip switch: you can disable the Caps Lock key; goodbye accidentally pressing Caps when you mean to press Shift. You can review the entire dip switch configuration chart here. The quick-start for Windows users is simple: double check that all the switches are in the off position (as seen in the photo above) and then simply toggle SW6 on to enable the media and backlighting function keys (this turns the menu key on the keyboard into a function key as typically found on laptop keyboards). After adjusting the dip switches to your liking, plug the keyboard into an open USB port on your computer (or into your PS/2 port using the included adapter). Design, Layout, and Backlighting The CODE keyboard comes in two flavors, a traditional 87-key layout (no number pad) and a traditional 104-key layout (number pad on the right hand side). We identify the layout as traditional because, despite some modern trappings and sneaky shortcuts, the actual form factor of the keyboard from the shape of the keys to the spacing and position is as classic as it comes. You won’t have to learn a new keyboard layout and spend weeks conditioning yourself to a smaller than normal backspace key or a PgUp/PgDn pair in an unconventional location. Just because the keyboard is very conventional in layout, however, doesn’t mean you’ll be missing modern amenities like media-control keys. The following additional functions are hidden in the F11, F12, Pause button, and the 2×6 grid formed by the Insert and Delete rows: keyboard illumination brightness, keyboard illumination on/off, mute, and then the typical play/pause, forward/backward, stop, and volume +/- in the Insert and Delete rows, respectively. While we weren’t sure what we’d think of the function-key system at first (especially after retiring a Microsoft Sidewinder keyboard with a huge and easily accessible volume knob on it), it took less than a day for us to adapt to using the Fn key, located next to the right Ctrl key, to adjust our media playback on the fly. Keyboard backlighting is a largely hit-or-miss undertaking but the CODE keyboard nails it. Not only does it have pleasant and easily adjustable through-the-keys lighting, but the key switches the keys themselves are attached to are mounted to a steel plate with white paint. Enough of the light reflects off the interior cavity of the keys and then diffuses across the white plate to provide nice even illumination in between the keys. Highlighting the steel plate beneath the keys brings us to the actual construction of the keyboard. It’s rock solid. The 87-key model, the one we tested, is 2.0 pounds. The 104-key is nearly a half pound heavier at 2.42 pounds. Between the steel plate, the extra-thick PCB board beneath the steel plate, and the thick ABS plastic housing, the keyboard has a very solid feel to it. 
Combine that heft with the previously mentioned thick rubber feet and you have a tank-like keyboard that won’t budge a millimeter during normal use. Examining The Keys This is the section of the review the hardcore typists and keyboard ninjas have been waiting for. We’ve looked at the layout of the keyboard, we’ve looked at the general construction of it, but what about the actual keys? There are a wide variety of keyboard construction techniques but the vast majority of modern keyboards use a rubber-dome construction. The key is floated in a plastic frame over a rubber membrane that has a little rubber dome for each key. The press of the physical key compresses the rubber dome downwards and a little bit of conductive material on the inside of the dome’s apex connects with the circuit board. Despite the near ubiquity of the design, many people dislike it. The principal complaint is that dome keyboards require a complete compression to register a keystroke; keyboard designers and enthusiasts refer to this as “bottoming out”. In other words, to register the “b” key, you need to completely press that key down. As such it slows you down and requires additional pressure and movement that, over the course of tens of thousands of keystrokes, adds up to a whole lot of wasted time and fatigue. The CODE keyboard features key switches manufactured by Cherry, a company that has manufactured key switches since the 1960s. Specifically the CODE features Cherry MX Clear switches. These switches feature the same classic design of the other Cherry switches (such as the MX Blue and Brown switch lineups) but they are significantly quieter (yes this is a mechanical keyboard, but no, your neighbors won’t think you’re firing off a machine gun) as they lack the audible click found in most Cherry switches. This isn’t to say that the keyboard doesn’t have a nice audible key press sound when the key is fully depressed, but the key mechanism doesn’t create a loud click sound when triggered. One of the great features of the Cherry MX Clear is a tactile “bump” that indicates the key has been compressed enough to register the stroke. For touch typists the very subtle tactile feedback is a great indicator that you can move on to the next stroke and provides a welcome speed boost. Even if you’re not trying to break any word-per-minute records, that little bump when pressing the key is satisfying. The Cherry key switches, in addition to providing a much more pleasant typing experience, are also significantly more durable than dome-style key switches. Rubber dome switch membrane keyboards are typically rated for 5-10 million contacts whereas the Cherry mechanical switches are rated for 50 million contacts. You’d have to write the next War and Peace and follow that up with A Tale of Two Cities: Zombie Edition, and then turn around and transcribe them both into a dozen different languages to even begin putting a tiny dent in the lifecycle of this keyboard. So what do the switches look like under the classically styled keys? You can take a look yourself with the included key puller. Slide the loop between the keys and then gently beneath the key you wish to remove: Wiggle the key puller gently back and forth while exerting a gentle upward pressure to pop the key off. You can repeat the process for every key, if you ever find yourself needing to extract piles of cat hair, Cheeto dust, or other foreign objects from your keyboard. 
There it is, the naked switch, the source of that wonderful crisp action with the tactile bump on each keystroke. The last feature worthy of a mention is the N-key rollover functionality of the keyboard. This is a feature you simply won’t find on non-mechanical keyboards and even gaming keyboards typically only have any sort of key rollover on the high-frequency keys like WASD. So what is N-key rollover and why do you care? On a typical mass-produced rubber-dome keyboard you cannot simultaneously press more than two keys as the third one doesn’t register. PS/2 keyboards allow for unlimited rollover (in other words you can’t out type the keyboard as all of your keystrokes, no matter how fast, will register); if you use the CODE keyboard with the PS/2 adapter you gain this ability. If you don’t use the PS/2 adapter and use the native USB, you still get 6-key rollover (and the CTRL, ALT, and SHIFT don’t count towards the 6) so realistically you still won’t be able to out type the computer as even the more finger twisting keyboard combos and high speed typing will still fall well within the 6-key rollover. The rollover absolutely doesn’t matter if you’re a slow hunt-and-peck typist, but if you’ve read this far into a keyboard review there’s a good chance that you’re a serious typist and that kind of quality construction and high-number key rollover is a fantastic feature. The Good, The Bad, and the Verdict We’ve put the CODE keyboard through the paces, we’ve played games with it, typed articles with it, left lengthy comments on Reddit, and otherwise used and abused it like we would any other keyboard. The Good: The construction is rock solid. In an emergency, we’re confident we could use the keyboard as a blunt weapon (and then resume using it later in the day with no ill effect on the keyboard). The Cherry switches are an absolute pleasure to type on; the Clear variety found in the CODE keyboard offer a really nice middle-ground between the gun-shot clack of a louder mechanical switch and the quietness of a lesser-quality dome keyboard without sacrificing quality. Touch typists will love the subtle tactile bump feedback. The dip switch system makes it very easy for users on different systems and with different keyboard layout needs to switch between operating system and keyboard layouts. If you’re investing a chunk of change in a keyboard it’s nice to know you can take it with you to a different operating system or “upgrade” it to a new layout if you decide to take up Dvorak-style typing. The backlighting is perfect. You can adjust it from a barely-visible glow to a blazing light-up-the-room brightness. Whatever your intensity preference, the white-coated steel backplate does a great job diffusing the light between the keys. You can easily remove the keys for cleaning (or to rearrange the letters to support a new keyboard layout). The weight of the unit combined with the extra thick rubber feet keeps it planted exactly where you place it on the desk. The Bad: While you’re getting your money’s worth, the $150 price tag is a shock when compared to the $20-60 price tags you find on lower-end keyboards. People used to large dedicated media keys independent of the traditional key layout (such as the large buttons and volume controls found on many modern keyboards) might be put off by the Fn-key style media controls on the CODE. The Verdict: The keyboard is clearly and heavily influenced by the needs of serious typists. 
Whether you’re a programmer, transcriptionist, or just somebody that wants to leave the lengthiest article comments the Internet has ever seen, the CODE keyboard offers a rock solid typing experience. Yes, $150 isn’t pocket change, but the quality of the CODE keyboard is so high and the typing experience is so enjoyable, you’re easily getting ten times the value you’d get out of purchasing a lesser keyboard. Even compared to other mechanical keyboards on the market, like the Das Keyboard, you’re still getting more for your money as other mechanical keyboards don’t come with the lovely-to-type-on Cherry MX Clear switches, backlighting, and hardware-based operating system keyboard layout switching. If it’s in your budget to upgrade your keyboard (especially if you’ve been slogging along with a low-end rubber-dome keyboard) there’s no good reason not to pick up a CODE keyboard. Key animation courtesy of Geekhack.org user Lethal Squirrel.

    Read the article

  • Use IIS Application Initialization for keeping ASP.NET Apps alive

    - by Rick Strahl
I've been working quite a bit with Windows Services in the recent months, and well, it turns out that Windows Services are quite a bear to debug, deploy, update and maintain. The process of getting services set up, debugged and updated is a major chore that has to be extensively documented and/or automated specifically. On most projects when a service is built, people end up scrambling for the right 'process' to use for administration. Web app deployment and maintenance on the other hand are common and well understood today, as we are constantly dealing with Web apps. There's plenty of infrastructure and tooling built into Web Tools like Visual Studio to facilitate the process. By comparison Windows Services or anything self-hosted for that matter seems convoluted. In fact, in a recent blog post I mentioned that on a recent project I'd been using self-hosting for SignalR inside of a Windows service, because the application is in fact a 'service' that also needs to send out lots of messages via SignalR. But the reality is that it could just as well be an IIS application with a service component that runs in the background. Either way you look at it, it's either a Windows Service with a built in Web Server, or an IIS application running a Service application, neither of which follows the standard Service or Web App template. Personally I much prefer Web applications. Running inside of IIS I get all the benefits of the IIS platform including service lifetime management (crash and restart), controlled shutdowns, the whole security infrastructure including easy certificate support, hot-swapping of code and the ability to publish directly to IIS from within Visual Studio with ease. Because of these benefits we set out to move from the self hosted service into an ASP.NET Web app instead. The Missing Link for ASP.NET as a Service: Auto-Loading I've had moments in the past where I wanted to run a 'service like' application in ASP.NET because when you think about it, it's so much easier to control a Web application remotely. Services are locked into start/stop operations, but if you host inside of a Web app you can write your own ticket and control it from anywhere. In fact nearly 10 years ago I built a background scheduling application that ran inside of ASP.NET and it worked great and it's still running doing its job today. The tricky part for running an app as a service inside of IIS, then and now, is how to get IIS and ASP.NET launched so your 'service' stays alive even after an Application Pool reset. 7 years ago I faked it by using a web monitor (my own West Wind Web Monitor app) I was running anyway to monitor my various web sites for uptime, and having the monitor ping my 'service' every 20 seconds to effectively keep ASP.NET alive or fire it back up after a reload. I used a simple scheduler class that also includes some logic for 'self-reloading'. Hacky for sure, but it worked reliably. Luckily today it's much easier and more integrated to get IIS to launch ASP.NET as soon as an Application Pool is started by using the Application Initialization Module. The Application Initialization Module basically allows you to turn on Preloading on the Application Pool and the Site/IIS App, which essentially fires a request through the IIS pipeline as soon as the Application Pool has been launched. This means that effectively your ASP.NET app becomes active immediately, Application_Start is fired, making sure your app stays up and running at all times. 
All the other features like Application Pool recycling and auto-shutdown after idle time still work, but IIS will then always immediately re-launch the application. Getting started with Application Initialization As of IIS 8 Application Initialization is part of the IIS feature set. For IIS 7 and 7.5 there's a separate download available via Web Platform Installer. On IIS 8 Application Initialization is an optional install component in Windows or the Windows Server Role Manager: This is an optional component so make sure you explicitly select it. IIS Configuration for Application Initialization Initialization needs to be applied on the Application Pool as well as the IIS Application level. As of IIS 8 these settings can be made through the IIS Administration console. Start with the Application Pool: Here you need to set both Start Automatically, which is always set, and the StartMode, which should be set to AlwaysRunning. Both have to be set - the Start Automatically flag is set true by default and controls the starting of the application pool itself, while the Always Running flag is required in order to launch the application. Without the latter flag set the site settings have no effect. Now on the Site/Application level you can specify whether the site should preload: Set the Preload Enabled flag to true. At this point ASP.NET apps should auto-load. This is all that's needed to pre-load the site if all you want is to get your site launched automatically. If you want a little more control over the load process you can add a few more settings to your web.config file that allow you to show a static page while the App is starting up. This can be useful if startup is really slow, so rather than displaying a blank screen while the user is twiddling their thumbs you can display a static HTML page instead: <system.webServer> <applicationInitialization remapManagedRequestsTo="Startup.htm" skipManagedModules="true"> <add initializationPage="ping.ashx" /> </applicationInitialization> </system.webServer> This allows you to specify a page to execute in a dry run. IIS basically fakes a request and pushes it directly into the IIS pipeline without hitting the network. You specify a page and IIS will fake a request to that page, in this case ping.ashx, which just returns a simple OK string - ie. a fast pipeline request. This request is run immediately after Application Pool restart, and while this request is running and your app is warming up, IIS can display an alternate static page - Startup.htm above. So instead of showing users an empty loading page when clicking a link on your site you can optionally show some sort of static status page that says, "we'll be right back". I'm not sure if that's such a brilliant idea since this can be pretty disruptive in some cases. Personally I think I prefer letting people wait, but at least get the response they were supposed to get back rather than a random page. But it's there if you need it. Note that the web.config stuff is optional. If you don't provide it IIS hits the default site link (/) and even if there's no matching request at the end of that request it'll still fire the request through the IIS pipeline. 
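The ping.ashx endpoint referenced above isn't shown in the article; as a rough sketch, such a warm-up endpoint is typically nothing more than a trivial handler that touches the managed pipeline and returns a short string (illustrative only - the actual handler may differ):

<%@ WebHandler Language="C#" Class="PingHandler" %>
using System.Web;

// Minimal warm-up endpoint: forces a managed request through the pipeline
// and returns immediately.
public class PingHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.Write("OK");
    }

    public bool IsReusable
    {
        get { return true; }
    }
}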
Ideally though you want to make sure that an ASP.NET endpoint is hit, either with your default page or by specifying the initializationPage, to ensure ASP.NET actually gets hit, since it's possible for IIS to fire unmanaged requests only for static pages (depending on how your pipeline is configured). What about AppDomain Restarts? In addition to full Worker Process recycles at the IIS level, ASP.NET also has to deal with AppDomain shutdowns which can occur for a variety of reasons: files are updated in the BIN folder, a Web Deploy to your site, web.config is changed, or a hard application crash. These operations don't cause the worker process to restart, but they do cause ASP.NET to unload the current AppDomain and start up a new one. Because the features above only apply to Application Pool restarts, AppDomain restarts could also cause your 'ASP.NET service' to stop processing in the background. In order to keep the app running on AppDomain recycles, you can resort to a simple ping in the Application_End event: protected void Application_End() { var client = new WebClient(); var url = App.AdminConfiguration.MonitorHostUrl + "ping.aspx"; client.DownloadString(url); Trace.WriteLine("Application Shut Down Ping: " + url); } which fires any ASP.NET url to the current site at the very end of the pipeline shutdown, which in turn ensures that the site immediately starts back up. Manual Configuration in ApplicationHost.config The above UI corresponds to the following ApplicationHost.config settings. If you're using IIS 7, there's no UI for these flags so you'll have to manually edit them. When you install the Application Initialization component into IIS it should auto-configure the module into ApplicationHost.config. Unfortunately for me, with Mr. Murphy in his best form, the module registration did not occur and I had to manually add it. <globalModules> <add name="ApplicationInitializationModule" image="%windir%\System32\inetsrv\warmup.dll" /> </globalModules> Most likely you won't ever need to add this, but if things are not working it's worth checking whether the module is actually registered. Next you need to configure the ApplicationPool and the Web site. The following are the two relevant entries in ApplicationHost.config. <system.applicationHost> <applicationPools> <add name="West Wind West Wind Web Connection" autoStart="true" startMode="AlwaysRunning" managedRuntimeVersion="v4.0" managedPipelineMode="Integrated"> <processModel identityType="LocalSystem" setProfileEnvironment="true" /> </add> </applicationPools> <sites> <site name="Default Web Site" id="1"> <application path="/MPress.Workflow.WebQueueMessageManager" applicationPool="West Wind West Wind Web Connection" preloadEnabled="true"> <virtualDirectory path="/" physicalPath="C:\Clients\…" /> </application> </site> </sites> </system.applicationHost> On the Application Pool make sure to set the autoStart and startMode flags to true and AlwaysRunning respectively. On the site make sure to set the preloadEnabled flag to true. And that's all you should need. You can still set the web.config settings described above as well. ASP.NET as a Service? In the particular application I'm working on currently, we have a queue manager that runs as a standalone service that polls a database queue and picks out jobs and processes them on several threads. The service can spin up any number of threads and keep these threads alive in the background while IIS is running doing its own thing. These threads are newly created threads, so they sit completely outside of the IIS thread pool. 
In order for this service to work all it needs is a long running reference that keeps it alive for the lifetime of the application. In this particular app there are two components that run in the background on their own threads: a scheduler that runs various scheduled tasks and handles things like picking up emails to send out outside of IIS's scope, and the QueueManager. Here's what this looks like in global.asax: public class Global : System.Web.HttpApplication { private static ApplicationScheduler scheduler; private static ServiceLauncher launcher; protected void Application_Start(object sender, EventArgs e) { // Pings the service and ensures it stays alive scheduler = new ApplicationScheduler() { CheckFrequency = 600000 }; scheduler.Start(); launcher = new ServiceLauncher(); launcher.Start(); // register so shutdown is controlled HostingEnvironment.RegisterObject(launcher); }} By keeping these objects around as static instances that are set only once on startup, they survive the lifetime of the application. The code in these classes is essentially unchanged from the Windows Service code except that I could remove the various overrides required for the Windows Service interface (OnStart, OnStop, OnResume etc.). Otherwise the behavior and operation is very similar. In this application ASP.NET serves two purposes: it acts as the host for SignalR and provides the administration interface which allows remote management of the 'service'. I can start and stop the service remotely by shutting down the ApplicationScheduler very easily. I can also very easily feed stats from the queue out directly via a couple of Web requests or (as we do now) through the SignalR service. Registering a Background Object with ASP.NET Notice also the use of HostingEnvironment.RegisterObject(). This function registers an object with ASP.NET to let it know that it's a background task that should be notified if the AppDomain shuts down. RegisterObject() requires an interface with a Stop() method that's fired and allows your code to respond to a shutdown request. Here's what the IRegisteredObject::Stop() method looks like on the launcher: public void Stop(bool immediate = false) { LogManager.Current.LogInfo("QueueManager Controller Stopped."); Controller.StopProcessing(); Controller.Dispose(); Thread.Sleep(1500); // give background threads some time HostingEnvironment.UnregisterObject(this); } Implementing IRegisteredObject should help with reliability on AppDomain shutdowns. Thanks to Justin Van Patten for pointing this out to me on Twitter. RegisterObject() is not required but I would highly recommend implementing it on whatever object controls your background processing to allow clean shutdowns when the AppDomain shuts down. Testing it out I'm still in the testing phase with this particular service to see if there are any side effects. But so far it doesn't look like it. With about 50 lines of code I was able to replace the Windows service startup with Web startup - everything else just worked as is. An honorable mention goes to SignalR 2.0's oWin hosting, because with the new oWin based hosting no code changes at all were required - merely a couple of configuration file settings and an assembly directive were needed to point at the SignalR startup class. Sweet! It also seems like SignalR is noticeably faster running inside of IIS compared to self-host. 
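To make the RegisterObject pattern above a little more concrete, here is a speculative end-to-end sketch of what a launcher class like this might look like - a background thread kept alive for the life of the AppDomain and registered with the hosting environment for orderly shutdown. The polling body and timings are placeholders, not the article's actual implementation:

using System;
using System.Threading;
using System.Web.Hosting;

public class BackgroundServiceLauncher : IRegisteredObject
{
    private Thread worker;
    private volatile bool stopRequested;

    public void Start()
    {
        // Let ASP.NET know about this background object so Stop() is called
        // when the AppDomain shuts down
        HostingEnvironment.RegisterObject(this);

        worker = new Thread(WorkLoop);
        worker.IsBackground = true;
        worker.Start();
    }

    private void WorkLoop()
    {
        while (!stopRequested)
        {
            // poll the queue / run scheduled work here
            Thread.Sleep(1000);
        }
    }

    public void Stop(bool immediate)
    {
        stopRequested = true;
        if (worker != null)
            worker.Join(5000);   // give the worker a few seconds to finish up
        HostingEnvironment.UnregisterObject(this);
    }
}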
Startup feels faster because of the preload. Starting and Stopping the 'Service' Because the application is running as a Web Server, it's easy to have a Web interface for starting and stopping the services running inside of the service. For our queue manager the SignalR service and front-end monitoring app have a play and stop button for toggling the queue. If you want more administrative control and have it work more like a Windows Service you can also stop the application pool explicitly from the command line, which would be equivalent to stopping and restarting a service. To start and stop from the command line you can use the IIS appCmd tool. To stop: > %windir%\system32\inetsrv\appcmd stop apppool /apppool.name:"Weblog" and to start: > %windir%\system32\inetsrv\appcmd start apppool /apppool.name:"Weblog" Note that when you explicitly force the AppPool to stop running, either in the UI (on the ApplicationPools page use Start/Stop) or via command line tools, the application pool will not auto-restart immediately. You have to manually start it back up. What's not to like? There are certainly a lot of benefits to running a background service in IIS, but… ASP.NET applications do have more overhead in terms of memory footprint and startup time is a little slower, but generally for server applications this is not a big deal. If the application is stable the service should fire up and stay running indefinitely. A lot of times this kind of service interface can simply be attached to an existing Web application, or if scalability requires it, be offloaded to its own Web server. Easier to work with But the ultimate benefit here is that it's much easier to work with a Web app as opposed to a service. While developing I can simply turn off the auto-launch features and launch the service on demand through IIS simply by hitting a page on the site. If I want to shut down, an IISRESET -stop will shut down the service easily enough. I can then attach a debugger anywhere I want and this works like any other ASP.NET application. Yes, you end up on a background thread for debugging, but Visual Studio handles that just fine and if you stay on a single thread this is no different than debugging any other code. Summary Using ASP.NET to run background service operations is probably not a super common scenario, but it probably should be something that is considered carefully when building services. Many applications have service-like features and with the auto-start functionality of the Application Initialization module, it's easy to build this functionality into ASP.NET. Especially when combined with the notification features of SignalR it becomes very, very easy to create rich services that can also communicate their status easily to the outside world. Whether it's existing applications that need some background processing for scheduling related tasks, or whether you create a separate site altogether just to host your service, it's easy to do and you can leverage the same tool chain you're already using for other Web projects. 
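Going back to the appCmd start/stop commands in the excerpt above: the same control can also be scripted from managed code through the IIS management API, assuming a reference to Microsoft.Web.Administration and an account with administrative rights - a quick sketch, not part of the original article:

using Microsoft.Web.Administration;

public static class AppPoolController
{
    public static void Stop(string poolName)
    {
        using (var manager = new ServerManager())
        {
            ApplicationPool pool = manager.ApplicationPools[poolName];
            if (pool != null && pool.State == ObjectState.Started)
                pool.Stop();
        }
    }

    public static void Start(string poolName)
    {
        using (var manager = new ServerManager())
        {
            ApplicationPool pool = manager.ApplicationPools[poolName];
            if (pool != null && pool.State == ObjectState.Stopped)
                pool.Start();
        }
    }
}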
If you have lots of service projects it's worth considering… give it some thought… © Rick Strahl, West Wind Technologies, 2005-2013. Posted in ASP.NET, SignalR, IIS

    Read the article

  • Help with Design for Vacation Tracking System (C#/.NET/Access/WebServices/SOA/Excel) [closed]

    - by Aaronaught
    I have been tasked with developing a system for tracking our company's paid time-off (vacation, sick days, etc.) At the moment we are using an Excel spreadsheet on a shared network drive, and it works pretty well, but we are concerned that we won't be able to "trust" employees forever and sometimes we run into locking issues when two people try to open the spreadsheet at once. So we are trying to build something a little more robust. I would like some input on this design in terms of maintainability, scalability, extensibility, etc. It's a pretty simple workflow we need to represent right now: I started with a basic MS Access schema like this: Employees (EmpID int, EmpName varchar(50), AllowedDays int) Vacations (VacationID int, EmpID int, BeginDate datetime, EndDate datetime) But we don't want to spend a lot of time building a schema and database like this and have to change it later, so I think I am going to go with something that will be easier to expand through configuration. Right now the vacation table has this schema: Vacations (VacationID int, PropName varchar(50), PropValue varchar(50)) And the table will be populated with data like this: VacationID | PropName | PropValue -----------+--------------+------------------ 1 | EmpID | 4 1 | EmpName | James Jones 1 | Reason | Vacation 1 | BeginDate | 2/24/2010 1 | EndDate | 2/30/2010 1 | Destination | Spectate Swamp 2 | ... | ... I think this is a pretty good, extensible design, we can easily add new properties to the vacation like the destination or maybe approval status, etc. I wasn't too sure how to go about managing the database of valid properties, I thought of putting them in a separate PropNames table but it gets complicated to manage all the different data types and people say that you shouldn't put CLR type names into a SQL database, so I decided to use XML instead, here is the schema: <VacationProperties> <PropertyNames>EmpID,EmpName,Reason,BeginDate,EndDate,Destination</PropertyNames> <PropertyTypes>System.Int32,System.String,System.String,System.DateTime,System.DateTime,System.String</PropertyTypes> <PropertiesRequired>true,true,false,true,true,false</PropertiesRequired> </VacationProperties> I might need more fields than that, I'm not completely sure. 
I'm parsing the XML like this (would like some feedback on the parsing code): string xml = File.ReadAllText("properties.xml"); Match m = Regex.Match(xml, "<(PropertyNames)>(.*?)</PropertyNames>"; string[] pn = m.Value.Split(','); // do the same for PropertyTypes, PropertiesRequired Then I use the following code to persist configuration changes to the database: string sql = "DROP TABLE VacationProperties"; sql = sql + " CREATE TABLE VacationProperties "; sql = sql + "(PropertyName varchar(100), PropertyType varchar(100) "; sql = sql + "IsRequired varchar(100))"; for (int i = 0; i < pn.Length; i++) { sql = sql + " INSERT VacationProperties VALUES (" + pn[i] + "," + pt[i] + "," + pv[i] + ")"; } // GlobalConnection is a singleton new SqlCommand(sql, GlobalConnection.Instance).ExecuteReader(); So far so good, but after a few days of this I then realized that a lot of this was just a more specific kind of a generic workflow which could be further abstracted, and instead of writing all of this boilerplate plumbing code I could just come up with a workflow and plug it into a workflow engine like Windows Workflow Foundation and have the users configure it: In order to support routing these configurations throw the workflow system, it seemed natural to implement generic XML Web Services for this instead of just using an XML file as above. I've used this code to implement the Web Services: public class VacationConfigurationService : WebService { [WebMethod] public void UpdateConfiguration(string xml) { // Above code goes here } } Which was pretty easy, although I'm still working on a way to validate that XML against some kind of schema as there's no error-checking yet. I also created a few different services for other operations like VacationSubmissionService, VacationReportService, VacationDataService, VacationAuthenticationService, etc. The whole Service Oriented Architecture looks like this: And because the workflow itself might change, I have been working on a way to integrate the WF workflow system with MS Visio, which everybody at the office already knows how to use so they could make changes pretty easily. We have a diagram that looks like the following (it's kind of hard to read but the main items are Activities, Authenticators, Validators, Transformers, Processors, and Data Connections, they're all analogous to the services in the SOA diagram above). The requirements for this system are: (Note - I don't control these, they were given to me by management) Main workflow must interface with Excel spreadsheet, probably through VBA macros (to ease the transition to the new system) Alerts should integrate with MS Outlook, Lotus Notes, and SMS (text messages). We also want to interface it with the company Voice Mail system but that is not a "hard" requirement. Performance requirements: Must handle 250,000 Transactions Per Second Should be able to handle up to 20,000 employees (right now we have 3) 99.99% uptime ("four nines") expected Must be secure against outside hacking, but users cannot be required to enter a username/password. Platforms: Must support Windows XP/Vista/7, Linux, iPhone, Blackberry, DOS 2.0, VAX, IRIX, PDP-11, Apple IIc. Time to complete: 6 to 8 weeks. My questions are: Is this a good design for the system so far? Am I using all of the recommended best practices for these technologies? How do I integrate the Visio diagram above with the Windows Workflow Foundation to call the ConfigurationService and persist workflow changes? Am I missing any important components? 
Will this be extensible enough to support any scenario via end-user configuration? Will the system scale to the above performance requirements? Will we need any expensive hardware to run it? Are there any "gotchas" I should know about with respect to cross-platform compatibility? For example would it be difficult to convert this to an iPhone app? How long would you expect this to take? (We've dedicated 1 week for testing so I'm thinking maybe 5 weeks?) Many thanks for your advice, Aaron
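As a hedged aside to the regex parsing shown above (not part of the original question): the same properties.xml could be read with XDocument instead of a regex, which avoids the string-matching pitfalls. Element names follow the XML shown in the question; the class name is made up:

using System;
using System.Xml.Linq;

// Sketch: reading the VacationProperties configuration with XDocument.
public class VacationPropertySet
{
    public string[] Names;
    public string[] Types;
    public string[] Required;

    public static VacationPropertySet Load(string path)
    {
        XElement root = XDocument.Load(path).Element("VacationProperties");
        return new VacationPropertySet
        {
            Names = ((string)root.Element("PropertyNames")).Split(','),
            Types = ((string)root.Element("PropertyTypes")).Split(','),
            Required = ((string)root.Element("PropertiesRequired")).Split(',')
        };
    }
}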

    Read the article

  • What’s new in ASP.NET 4.0: Core Features

    - by Rick Strahl
    Microsoft released the .NET Runtime 4.0 and with it comes a brand spanking new version of ASP.NET – version 4.0 – which provides an incremental set of improvements to an already powerful platform. .NET 4.0 is a full release of the .NET Framework, unlike version 3.5, which was merely a set of library updates on top of the .NET Framework version 2.0. Because of this full framework revision, there has been a welcome bit of consolidation of assemblies and configuration settings. The full runtime version change to 4.0 also means that you have to explicitly pick version 4.0 of the runtime when you create a new Application Pool in IIS, unlike .NET 3.5, which actually requires version 2.0 of the runtime. In this first of two parts I'll take a look at some of the changes in the core ASP.NET runtime. In the next edition I'll go over improvements in Web Forms and Visual Studio. Core Engine Features Most of the high profile improvements in ASP.NET have to do with Web Forms, but there are a few gems in the core runtime that should make life easier for ASP.NET developers. The following list describes some of the things I've found useful among the new features. Clean web.config Files Are Back! If you've been using ASP.NET 3.5, you probably have noticed that the web.config file has turned into quite a mess of configuration settings between all the custom handler and module mappings for the various web server versions. Part of the reason for this mess is that .NET 3.5 is a collection of add-on components running on top of the .NET Runtime 2.0 and so almost all of the new features of .NET 3.5 where essentially introduced as custom modules and handlers that had to be explicitly configured in the config file. Because the core runtime didn't rev with 3.5, all those configuration options couldn't be moved up to other configuration files in the system chain. With version 4.0 a consolidation was possible, and the result is a much simpler web.config file by default. A default empty ASP.NET 4.0 Web Forms project looks like this: <?xml version="1.0"?> <configuration> <system.web> <compilation debug="true" targetFramework="4.0" /> </system.web> </configuration> Need I say more? Configuration Transformation Files to Manage Configurations and Application Packaging ASP.NET 4.0 introduces the ability to create multi-target configuration files. This means it's possible to create a single configuration file that can be transformed based on relatively simple replacement rules using a Visual Studio and WebDeploy provided XSLT syntax. The idea is that you can create a 'master' configuration file and then create customized versions of this master configuration file by applying some relatively simplistic search and replace, add or remove logic to specific elements and attributes in the original file. 
To give you an idea, here's the example code that Visual Studio creates for a default web.Release.config file, which replaces a connection string, removes the debug attribute and replaces the CustomErrors section: <?xml version="1.0"?> <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform"> <connectionStrings> <add name="MyDB" connectionString="Data Source=ReleaseSQLServer;Initial Catalog=MyReleaseDB;Integrated Security=True" xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/> </connectionStrings> <system.web> <compilation xdt:Transform="RemoveAttributes(debug)" /> <customErrors defaultRedirect="GenericError.htm" mode="RemoteOnly" xdt:Transform="Replace"> <error statusCode="500" redirect="InternalError.htm"/> </customErrors> </system.web> </configuration> You can see the XSL transform syntax that drives this functionality. Basically, only the elements listed in the override file are matched and updated – all the rest of the original web.config file stays intact. Visual Studio 2010 supports this functionality directly in the project system so it's easy to create and maintain these customized configurations in the project tree. Once you're ready to publish your application, you can then use the Publish <yourWebApplication> option on the Build menu which allows publishing to disk, via FTP or to a Web Server using Web Deploy. You can also create a deployment package as a .zip file which can be used by the WebDeploy tool to configure and install the application. You can manually run the Web Deploy tool or use the IIS Manager to install the package on the server or other machine. You can find out more about WebDeploy and Packaging here: http://tinyurl.com/2anxcje. Improved Routing Routing provides a relatively simple way to create clean URLs with ASP.NET by associating a template URL path and routing it to a specific ASP.NET HttpHandler. Microsoft first introduced routing with ASP.NET MVC and then they integrated routing with a basic implementation in the core ASP.NET engine via a separate ASP.NET routing assembly. In ASP.NET 4.0, the process of using routing functionality gets a bit easier. First, routing is now rolled directly into System.Web, so no extra assembly reference is required in your projects to use routing. The RouteCollection class now includes a MapPageRoute() method that makes it easy to route to any ASP.NET Page requests without first having to implement an IRouteHandler implementation. It would have been nice if this could have been extended to serve *any* handler implementation, but unfortunately for anything but a Page derived handlers you still will have to implement a custom IRouteHandler implementation. ASP.NET Pages now include a RouteData collection that will contain route information. Retrieving route data is now a lot easier by simply using this.RouteData.Values["routeKey"] where the routeKey is the value specified in the route template (i.e., "users/{userId}" would use Values["userId"]). 
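To illustrate the MapPageRoute() helper mentioned above, a typical registration in Global.asax might look like the following (the route name, url template and target page are made-up examples):

using System;
using System.Web;
using System.Web.Routing;

public class Global : HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Routes a friendly url directly to a Web Forms page -
        // no custom IRouteHandler implementation required
        RouteTable.Routes.MapPageRoute("users", "users/{userId}", "~/UserDetail.aspx");
    }
}

// In UserDetail.aspx.cs the route value is then available via:
//   string userId = (string)RouteData.Values["userId"];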
The Page class also has a GetRouteUrl() method that you can use to create URLs with route data values rather than hardcoding the URL: <%= this.GetRouteUrl("users",new { userId="ricks" }) %> You can also use the new Expression syntax using <%$RouteUrl %> to accomplish something similar, which can be easier to embed into Page or MVC View code: <a runat="server" href='<%$RouteUrl:RouteName=user, id=ricks %>'>Visit User</a> Finally, the Response object also includes a new RedirectToRoute() method to build a route url for redirection without hardcoding the URL. Response.RedirectToRoute("users", new { userId = "ricks" }); All of these routines are helpers that have been integrated into the core ASP.NET engine to make it easier to create routes and retrieve route data, which hopefully will result in more people taking advantage of routing in ASP.NET. To find out more about the routing improvements you can check out Dan Maharry's blog which has a couple of nice blog entries on this subject: http://tinyurl.com/37trutj and http://tinyurl.com/39tt5w5. Session State Improvements Session state is an often used and abused feature in ASP.NET and version 4.0 introduces a few enhancements geared towards making session state more efficient and to minimize at least some of the ill effects of overuse. The first improvement affects out of process session state, which is typically used in web farm environments or for sites that store application sensitive data that must survive AppDomain restarts (which in my opinion is just about any application). When using OutOfProc session state, ASP.NET serializes all the data in the session statebag into a blob that gets carried over the network and stored either in the State server or SQL Server via the Session provider. Version 4.0 provides some improvement in this serialization of the session data by offering an enableCompression option on the web.Config <Session> section, which forces the serialized session state to be compressed. Depending on the type of data that is being serialized, this compression can reduce the size of the data travelling over the wire by as much as a third. It works best on string data, but can also reduce the size of binary data. In addition, ASP.NET 4.0 now offers a way to programmatically turn session state on or off as part of the request processing queue. In prior versions, the only way to specify whether session state is available is by implementing a marker interface on the HTTP handler implementation. In ASP.NET 4.0, you can now turn session state on and off programmatically via HttpContext.Current.SetSessionStateBehavior() as part of the ASP.NET module pipeline processing as long as it occurs before the AquireRequestState pipeline event. Output Cache Provider Output caching in ASP.NET has been a very useful but potentially memory intensive feature. The default OutputCache mechanism works through in-memory storage that persists generated output based on various lifetime related parameters. While this works well enough for many intended scenarios, it also can quickly cause runaway memory consumption as the cache fills up and serves many variations of pages on your site. ASP.NET 4.0 introduces a provider model for the OutputCache module so it becomes possible to plug-in custom storage strategies for cached pages. 
One of the goals also appears to be to consolidate some of the different cache storage mechanisms used in .NET in general into a generic Windows AppFabric framework in the future, so that various mechanisms like OutputCache, the non-Page specific ASP.NET cache and possibly even session state can eventually use the same caching engine for storage of persisted data, both in-memory and in out-of-process scenarios. For developers, the OutputCache provider feature means that you can now extend caching on your own by implementing a custom cache provider based on the System.Web.Caching.OutputCacheProvider class (a minimal provider sketch also appears at the end of this section). You can find more info on creating an Output Cache provider in Gunnar Peipman's blog at: http://tinyurl.com/2vt6g7l. Response.RedirectPermanent ASP.NET 4.0 includes the ability to issue a permanent redirect as an HTTP 301 Moved Permanently response rather than the standard 302 Redirect response. In pre-4.0 versions you had to manually create your permanent redirect by setting the Status and StatusCode properties – Response.RedirectPermanent() makes this operation more obvious and discoverable. There's also a Response.RedirectToRoutePermanent() which provides permanent redirection of route URLs. Preloading of Applications ASP.NET 4.0 provides a new feature to preload ASP.NET applications on startup, which is meant to provide a more consistent startup experience. If your application has a lengthy startup cycle it can appear very slow to serve data to clients while the application is warming up and loading initial resources. So rather than serve these startup requests slowly, in ASP.NET 4.0 you can force the application to initialize itself first before even accepting requests for processing. This feature works only on IIS 7.5 (Windows 7 and Windows Server 2008 R2) and works in combination with IIS. You can set up a worker process in IIS 7.5 to always be running, which starts the Application Pool worker process immediately. ASP.NET 4.0 then allows you to specify site-specific settings by setting the serviceAutoStartEnabled attribute on a particular site, along with an optional serviceAutoStartProvider class that can be used to receive "startup events" when the application starts up. This event in turn can be used to configure the application and optionally pre-load cache data and other information required by the app on startup.  The configuration settings need to be made in applicationhost.config: <sites> <site name="WebApplication2" id="1"> <application path="/" serviceAutoStartEnabled="true" serviceAutoStartProvider="PreWarmup" /> </site> </sites> <serviceAutoStartProviders> <add name="PreWarmup" type="PreWarmupProvider,MyAssembly" /> </serviceAutoStartProviders> Hooking up a warm-up provider is optional, so you can omit the provider definition and reference. If you do define it, here's what it looks like: public class PreWarmupProvider : System.Web.Hosting.IProcessHostPreloadClient { public void Preload(string[] parameters) { // initialization for app } } While this code is running, ASP.NET/IIS will hold requests back from the pipeline, so the application will not start taking requests until it completes. The idea is that you can perform any pre-loading of resources and cache values so that the first request is ready to run at optimal performance without lag.
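Two of the extensibility points described in this section lend themselves to short illustrations. First, here is a minimal sketch (my own example, not from the article) of switching session state off programmatically from an HttpModule before the AcquireRequestState event; the /static/ path rule is purely an illustrative assumption:

    // Sketch: conditionally turning session state off per request (ASP.NET 4.0)
    using System;
    using System.Web;
    using System.Web.SessionState;

    public class SessionStateTuningModule : IHttpModule
    {
        public void Init(HttpApplication application)
        {
            // BeginRequest fires well before AcquireRequestState, so the change takes effect
            application.BeginRequest += (sender, e) =>
            {
                HttpContext context = ((HttpApplication)sender).Context;

                // Hypothetical rule: content under /static/ never needs session state
                if (context.Request.Path.StartsWith("/static/", StringComparison.OrdinalIgnoreCase))
                {
                    context.SetSessionStateBehavior(SessionStateBehavior.Disabled);
                }
            };
        }

        public void Dispose() { }
    }

And here is an equally minimal sketch of a custom output cache provider derived from System.Web.Caching.OutputCacheProvider. It uses a static in-memory dictionary purely for illustration – a real provider would normally target out-of-process storage, and it still has to be registered as an output cache provider in web.config before ASP.NET will use it:

    // Sketch: a trivial custom OutputCache provider (in-memory, illustration only)
    using System;
    using System.Collections.Concurrent;
    using System.Web.Caching;

    public class SimpleMemoryOutputCacheProvider : OutputCacheProvider
    {
        private static readonly ConcurrentDictionary<string, Tuple<object, DateTime>> _cache =
            new ConcurrentDictionary<string, Tuple<object, DateTime>>();

        public override object Get(string key)
        {
            Tuple<object, DateTime> entry;
            if (_cache.TryGetValue(key, out entry) && entry.Item2 > DateTime.UtcNow)
                return entry.Item1;

            Remove(key);          // expired or missing
            return null;
        }

        public override object Add(string key, object entry, DateTime utcExpiry)
        {
            // Add must return the existing entry if one is already cached
            object existing = Get(key);
            if (existing != null)
                return existing;

            _cache[key] = Tuple.Create(entry, utcExpiry);
            return entry;
        }

        public override void Set(string key, object entry, DateTime utcExpiry)
        {
            _cache[key] = Tuple.Create(entry, utcExpiry);   // overwrite unconditionally
        }

        public override void Remove(string key)
        {
            Tuple<object, DateTime> ignored;
            _cache.TryRemove(key, out ignored);
        }
    }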
Runtime Performance Improvements According to Microsoft, there have also been a number of invisible performance improvements in the internals of the ASP.NET runtime that should make ASP.NET 4.0 applications run more efficiently and use fewer resources. These improvements require no changes to applications and are virtually transparent – you get the benefits simply by updating to ASP.NET 4.0. Summary The core feature set changes are minimal, which continues a tradition of small incremental changes to the ASP.NET runtime. ASP.NET has proven itself to be a solid platform, and I'm actually rather happy to see that most of the effort in this release went into stability, performance and usability improvements rather than a massive amount of new features. The new functionality added in 4.0 is minimal but very useful. A lot of people are still running pure .NET 2.0 applications these days and have stayed off .NET 3.5 for some time now. I think that version 4.0, with its full .NET runtime rev and its assembly and configuration consolidation, will make an attractive platform for developers to update to. If you're a Web Forms developer in particular, ASP.NET 4.0 includes a host of new features in the Web Forms engine that are significant enough to warrant a quick move to .NET 4.0. I'll cover those changes in my next column. Until then, I suggest you give ASP.NET 4.0 a spin and see for yourself how the new features can help you out. © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET

    Read the article

  • How to handle screen orientation change when progress dialog and background thread active?

    - by Heikki Toivonen
    My program does some network activity in a background thread. Before starting, it pops up a progress dialog. The dialog is dismissed on the handler. This all works fine, except when screen orientation changes while the dialog is up (and the background thread is going). At this point the app either crashes, or deadlocks, or gets into a weird stage where the app does not work at all until all the threads have been killed. How can I handle the screen orientation change gracefully? The sample code below matches roughly what my real program does: public class MyAct extends Activity implements Runnable { public ProgressDialog mProgress; // UI has a button that when pressed calls send public void send() { mProgress = ProgressDialog.show(this, "Please wait", "Please wait", true, true); Thread thread = new Thread(this); thread.start(); } public void run() { Thread.sleep(10000); Message msg = new Message(); mHandler.sendMessage(msg); } private final Handler mHandler = new Handler() { @Override public void handleMessage(Message msg) { mProgress.dismiss(); } }; } Stack: E/WindowManager( 244): Activity MyAct has leaked window com.android.internal.policy.impl.PhoneWindow$DecorView@433b7150 that was originally added here E/WindowManager( 244): android.view.WindowLeaked: Activity MyAct has leaked window com.android.internal.policy.impl.PhoneWindow$DecorView@433b7150 that was originally added here E/WindowManager( 244): at android.view.ViewRoot.<init>(ViewRoot.java:178) E/WindowManager( 244): at android.view.WindowManagerImpl.addView(WindowManagerImpl.java:147) E/WindowManager( 244): at android.view.WindowManagerImpl.addView(WindowManagerImpl.java:90) E/WindowManager( 244): at android.view.Window$LocalWindowManager.addView(Window.java:393) E/WindowManager( 244): at android.app.Dialog.show(Dialog.java:212) E/WindowManager( 244): at android.app.ProgressDialog.show(ProgressDialog.java:103) E/WindowManager( 244): at android.app.ProgressDialog.show(ProgressDialog.java:91) E/WindowManager( 244): at MyAct.send(MyAct.java:294) E/WindowManager( 244): at MyAct$4.onClick(MyAct.java:174) E/WindowManager( 244): at android.view.View.performClick(View.java:2129) E/WindowManager( 244): at android.view.View.onTouchEvent(View.java:3543) E/WindowManager( 244): at android.widget.TextView.onTouchEvent(TextView.java:4664) E/WindowManager( 244): at android.view.View.dispatchTouchEvent(View.java:3198) E/WindowManager( 244): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:857) E/WindowManager( 244): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:857) E/WindowManager( 244): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:857) E/WindowManager( 244): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:857) E/WindowManager( 244): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:857) E/WindowManager( 244): at com.android.internal.policy.impl.PhoneWindow$DecorView.superDispatchTouchEvent(PhoneWindow.java:1593) E/WindowManager( 244): at com.android.internal.policy.impl.PhoneWindow.superDispatchTouchEvent(PhoneWindow.java:1089) E/WindowManager( 244): at android.app.Activity.dispatchTouchEvent(Activity.java:1871) E/WindowManager( 244): at com.android.internal.policy.impl.PhoneWindow$DecorView.dispatchTouchEvent(PhoneWindow.java:1577) E/WindowManager( 244): at android.view.ViewRoot.handleMessage(ViewRoot.java:1140) E/WindowManager( 244): at android.os.Handler.dispatchMessage(Handler.java:88) E/WindowManager( 244): at android.os.Looper.loop(Looper.java:123) E/WindowManager( 244): at 
android.app.ActivityThread.main(ActivityThread.java:3739) E/WindowManager( 244): at java.lang.reflect.Method.invokeNative(Native Method) E/WindowManager( 244): at java.lang.reflect.Method.invoke(Method.java:515) E/WindowManager( 244): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:739) E/WindowManager( 244): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:497) E/WindowManager( 244): at dalvik.system.NativeStart.main(Native Method) and: W/dalvikvm( 244): threadid=3: thread exiting with uncaught exception (group=0x4000fe68) E/AndroidRuntime( 244): Uncaught handler: thread main exiting due to uncaught exception E/AndroidRuntime( 244): java.lang.IllegalArgumentException: View not attached to window manager E/AndroidRuntime( 244): at android.view.WindowManagerImpl.findViewLocked(WindowManagerImpl.java:331) E/AndroidRuntime( 244): at android.view.WindowManagerImpl.removeView(WindowManagerImpl.java:200) E/AndroidRuntime( 244): at android.view.Window$LocalWindowManager.removeView(Window.java:401) E/AndroidRuntime( 244): at android.app.Dialog.dismissDialog(Dialog.java:249) E/AndroidRuntime( 244): at android.app.Dialog.access$000(Dialog.java:59) E/AndroidRuntime( 244): at android.app.Dialog$1.run(Dialog.java:93) E/AndroidRuntime( 244): at android.app.Dialog.dismiss(Dialog.java:233) E/AndroidRuntime( 244): at MyAct$1.handleMessage(MyAct.java:321) E/AndroidRuntime( 244): at android.os.Handler.dispatchMessage(Handler.java:88) E/AndroidRuntime( 244): at android.os.Looper.loop(Looper.java:123) E/AndroidRuntime( 244): at android.app.ActivityThread.main(ActivityThread.java:3739) E/AndroidRuntime( 244): at java.lang.reflect.Method.invokeNative(Native Method) E/AndroidRuntime( 244): at java.lang.reflect.Method.invoke(Method.java:515) E/AndroidRuntime( 244): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:739) E/AndroidRuntime( 244): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:497) E/AndroidRuntime( 244): at dalvik.system.NativeStart.main(Native Method) I/Process ( 46): Sending signal. PID: 244 SIG: 3 I/dalvikvm( 244): threadid=7: reacting to signal 3 I/dalvikvm( 244): Wrote stack trace to '/data/anr/traces.txt' I/Process ( 244): Sending signal. PID: 244 SIG: 9 I/ActivityManager( 46): Process MyAct (pid 244) has died. I have tried to dismiss the progress dialog in onSaveInstanceState, but that just prevents an immediate crash. The background thread is still going, and the UI is in partially drawn state. Need to kill the whole app before it starts working again.
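    For readers hitting the same problem: one commonly used mitigation (my own sketch, not part of the original question or an official answer) is to dismiss the dialog before the Activity is destroyed and hand the running worker thread across the configuration change via onRetainNonConfigurationInstance(), re-showing the dialog in the re-created Activity. The field and method names below are illustrative, and the wiring needed to make the completion Handler target the new Activity instance is intentionally omitted:

    import android.app.Activity;
    import android.app.ProgressDialog;
    import android.os.Bundle;

    public class MyAct extends Activity implements Runnable {
        private ProgressDialog mProgress;
        private Thread mWorker;

        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // If a worker survived the rotation, re-attach to it and show the dialog again
            mWorker = (Thread) getLastNonConfigurationInstance();
            if (mWorker != null && mWorker.isAlive()) {
                showProgress();
            }
        }

        @Override
        public Object onRetainNonConfigurationInstance() {
            return mWorker;   // handed to the re-created Activity instance
        }

        @Override
        protected void onDestroy() {
            // Dismiss before the window is torn down to avoid the leaked-window error
            if (mProgress != null && mProgress.isShowing()) {
                mProgress.dismiss();
            }
            super.onDestroy();
        }

        private void showProgress() {
            mProgress = ProgressDialog.show(this, "Please wait", "Please wait", true, true);
        }

        public void send() {
            showProgress();
            mWorker = new Thread(this);
            mWorker.start();
        }

        public void run() { /* background work, then notify the current Activity via a Handler */ }
    }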

    Read the article

  • Scrum Master Stephen Forte Teaches Agile Development, Silverlight and BI at GIDS 2010

    - by rajesh ahuja
Great Indian Developer Summit 2010 – Gold Standard for India's Software Developer Ecosystem Bangalore, March 25, 2010: The author of several books on application and database development, including Programming SQL Server 2008, and certified Scrum Master Stephen Forte is coming this summer to India's biggest summit for the developer ecosystem - Great Indian Developer Summit. At the summit, Stephen will conduct a workshop guaranteed to give attendees a jump start in taking a certified scrum master exam. Scrum is one of the most popular Agile project management and development methods, and it is starting to be adopted at major corporations and on very large projects. After an introduction to the basics of Scrum like project planning and estimation, the Scrum Master, team, product owner and burn down, and of course the daily Scrum, Stephen will show many real world applications of the methodology drawn from his own experience as a Scrum Master. Negotiating with the business, estimation and team dynamics are all discussed, as well as how to use Scrum in small organizations, large enterprise environments and consulting environments. Stephen will also discuss using Scrum with virtual teams and in an off-shoring environment. He will then take a look at the tools used for Agile development, including planning poker, unit testing, and much more. On 20th April at the GIDS.NET Conference, Stephen will also conduct a series of sessions on Microsoft computing technologies. He will teach how to build data-driven, n-tier Rich Internet Applications (RIA) with Silverlight 4.0. Line of business (LOB) applications in Silverlight 4.0 are easy to build by tapping the power of WCF RIA Services, the Silverlight Toolkit, and elevated out-of-browser support. Stephen's demo-centric session will walk you through an example of building a LOB application with Silverlight 4.0. See how Silverlight and WCF RIA Services support domain logic, services, data binding, validation, server based paging, authentication, authorization and much more. Silverlight 4.0 means business. Silverlight runs C# and Visual Basic code, and so it seems natural that a business application might share some code between the Silverlight client and its ASP.NET Web server. You may want to run some code client-side for interactivity, but re-run that code on the server for security or reliability. This is possible, and there are several techniques you can use to accomplish this goal. In Stephen's second talk, learn about the various techniques and their pros and cons. Some techniques work better in C#, others in VB. Still others are simpler with a little extra tooling or code-generation. Any serious Silverlight business application will almost certainly face this issue, and this session gets you going fast. In the third talk, Stephen will explain how to properly architect and deploy a BI application using a mix of some exciting new tools and some old familiar ones. He will start with a traditional relational transaction-centric database (OLTP) and explore ways to build a data warehouse (OLAP), looking at the star and snowflake schemas. Next he will look at the process of extracting, transforming, and loading (ETL) your OLTP data into your data warehouse. Different techniques for ETL will be described and the various tradeoffs will be discussed. Then he will look at using the warehouse for reporting, drill down, and data analysis in Microsoft Excel's PowerPivot 2010.
The session will round off by showing how to properly build a cube and build a data analysis application on top of that cube, and will conclude by looking at some tools to help with the data visualization process. Every year, GIDS is a game changer for several thousands of IT professionals, providing them with a competitive edge over their peers, enlightening them with bleeding-edge information most useful in their daily jobs, helping them network with world-class experts and visionaries, and providing them with a much needed thrust in their careers. Attend Great Indian Developer Summit to gain the information, education and solutions you seek, from post-conference workshops and breakout sessions by expert instructors to keynotes by industry heavyweights, enhanced networking opportunities, and more. About Great Indian Developer Summit Great Indian Developer Summit is the gold standard for India's software developer ecosystem for gaining exposure to and evaluating new projects, tools, services, platforms, languages, software and standards. Packed with premium knowledge, action plans and advice from been-there-done-it veterans, creators, and visionaries, the 2010 edition of Great Indian Developer Summit features focused sessions, case studies, workshops and power panels that will transform you into a force to reckon with. The summit features three co-located conferences – GIDS.NET, GIDS.Web and GIDS.Java – plus an exclusive day of in-depth tutorials, GIDS.Workshops, from 20 April to 24 April at the IISc campus in Bangalore. At GIDS you'll participate in hundreds of sessions encompassing the full range of Microsoft computing, Java, Agile, RIA, Rich Web, open source/standards, languages, frameworks and platforms; practical tutorials that deep dive into technical skills and best practices; inspirational keynote presentations; an Expo Hall featuring dozens of the latest projects and products; engaging networking events; and interaction with the best and brightest speakers from around the world. For further information on GIDS 2010, please visit the summit on the web at http://www.developersummit.com/ A Saltmarch Media Press Release E: [email protected] Ph: +91 80 4005 1000

    Read the article

  • Jolicloud is a Nifty New OS for Your Netbook

    - by Matthew Guay
Want to breathe new life into your netbook?  Here's a quick look at Jolicloud, a unique new Linux-based OS that lets you use your netbook in a whole new way. Netbooks have been an interesting category of computers.  When they were first released, most netbooks came with a stripped-down Linux-based operating system designed to let you easily access the internet first and foremost.  Consumers wanted more from their netbooks, so full OSes such as Windows XP and Ubuntu became the standard on netbooks.  Microsoft worked hard to get Windows 7 working great on netbooks, and today most netbooks run Windows 7 well.  But the Linux community hasn't stood still either, and Jolicloud is proof of that.  Jolicloud is a unique OS designed to bring the best of both webapps and standard programs to your netbook.   Keep reading to see if this is the perfect netbook OS for you. Getting Started Installing Jolicloud on your netbook is easy thanks to the Jolicloud Express installer for Windows.  Since many netbooks run Windows by default, this makes it easy to install Jolicloud.  Plus, your Windows install is left untouched, so you can still easily access all your Windows files and programs. Download and run the roughly 700MB installer (link below) just as a normal installer in Windows. This will first extract the needed files. Click Get started to install Jolicloud on your netbook. Enter a username, password, and nickname for your computer.  Please note that the username must be all lowercase, and the nickname should not contain spaces or special characters.   Now you can review the default installation settings.  By default it will take up 39GB and install on your C:\ drive in English.  If you wish to change this, click Change. We chose to install it on the D: drive on this netbook, as its hard drive was already partitioned into two parts.  Click Save when your settings are all correct, and then click Next in the previous window. Jolicloud will prepare for the installation.  This took about 5 minutes in our test.  Click Next when this is finished. Click Restart now to install and run Jolicloud. When your netbook reboots, it will initialize the Jolicloud setup. It will then automatically finish the installation.  Just sit back and wait; there's nothing for you to do right now.  The installation took about 20 minutes in our test. Jolicloud will automatically reboot when the setup is finished. Once it's rebooted, you're ready to go!  Enter the username, then the password, that you chose earlier when you were installing Jolicloud from Windows. Welcome to your Jolicloud desktop! Hardware Support We installed Jolicloud on a Samsung N150 netbook with an Atom N450 processor, 1GB of RAM, a 250GB hard drive, and WiFi b/g/n with Bluetooth.  Amazingly, once Jolicloud was installed, everything was ready to use.  No drivers to install, no settings to hassle with; it was all installed and set up perfectly.  Power settings worked great, and closing the netbook put it to sleep just like in Windows. WiFi drivers have typically been difficult to find and install on Linux, but Jolicloud had our netbook's wifi working immediately.  To get online, simply click the Wireless icon on the top right, and select the wireless network you want to connect to. Jolicloud will let you know when it is signed on. Wired LAN networking was also seamless; simply connect your cable and you're ready to go.  The webcam and touchpad also worked perfectly right away.
The only thing missing was multitouch; this touchpad has two-finger scroll, pinch zoom, and other nice multitouch features in Windows, but in Jolicloud it only functioned as a standard touchpad.  It did have tap-to-click activated by default, as well as right-side scrolling, which is nice. Jolicloud also supported our video card without any extra work.  The native resolution was already selected, and the only problem we had with the screen was that there was no apparent way to change the brightness.  This is not a major problem, but would be nice to have.  The Samsung N150 has Intel GMA3150 integrated graphics, and Jolicloud promises 1080p HD video on it.  It did play back 720p H.264 video flawlessly without installing anything extra, but it stuttered on full 1080p HD (which is exactly the same as this netbook's video playback in Windows 7 – 720p works great, but it stutters on 1080p).  We would be excited to see full HD on this netbook, but 720p is definitely fine for most stuff.   Jolicloud supports a wide range of netbooks, and based on our experience we would expect it to work as well on any supported hardware.  Check out the list of supported netbooks to see if your netbook is supported; if not, it still may work but you may have to install special drivers. Jolicloud's performance was very similar to Windows 7 on our netbook.  It boots in about 30 seconds, and apps load fairly quickly.  In general, we couldn't tell much difference in performance between Jolicloud and Windows 7, though this isn't a problem since Windows 7 runs great on the current generation of netbooks. Using Jolicloud Ready to start putting Jolicloud to use?  From a fresh Jolicloud install you can run several built-in apps, such as Firefox, a calculator, and the chat client Pidgin.  It also has a media player and file viewer installed, so you can play MP3s or MPG videos, or read PDF ebooks without installing anything extra.  It also has Flash player installed so you can watch videos online easily. You can also directly access all of your files from the right side of your home screen.  You can even access your Windows files; in our test, the 116.9 GB Media was C: from Windows.  Select it to browse and open any file you had saved in Windows. You may need to enter your password to access it. Once you're authenticated, you'll see all of your Windows files and folders.  Your User files (Documents, Music, Videos, etc.) will be in the Users folder. And, you can easily add files from removable media such as USB flash drives and memory cards.  Jolicloud recognized a flash drive we tested with no trouble at all. Add new apps But the best part about Jolicloud is that it makes it very easy to install new apps.  Click the Get Started button on your homescreen. You'll first need to create an account.  You can then use this same account on another netbook if you wish, and your settings will automatically be synced between the two. You can either sign up using your Facebook account, or you can sign up the traditional way with your email address, name, and password.  If you sign up this way, you will need to confirm your email address before your account will be finished. Now, choose your netbook model from the list, and enter a name for your computer. And that's it!  You'll now see the Jolicloud dashboard, which will show you updates and notifications from friends who also use Jolicloud. Click the App directory to find new apps for your netbook.
Here you will find a variety of webapps, such as Gmail, along with native applications, such as Skype, that you can install on your netbook.  Simply click the Install button on the right to add the app to your netbook. You will be prompted to enter your system password, and then the app will install without any further input.   Once an app is installed, a check mark will appear beside its name.  You can remove it by clicking the Remove button, and it will uninstall seamlessly. Webapps, such as Gmail, actually run in a Chrome-powered window that lets the webapp run full screen.  This gives the webapps a native feel, but actually they're just running the same as they would in a standard web browser.   The Jolicloud Interface Most apps run maximized, and there is no way to run them smaller.  This in general works well, since with small screens most apps need to run full-screen anyhow. Smaller apps, such as a calculator or the Pidgin chat client, run in a window just like they do on other operating systems. You can switch to another app that's running by selecting its icon on the top left, or you can go back to the home screen by clicking the home screen.  If you're finished with a program, simply click the red X button on the top right of the window when you're running it. Or, you can switch between programs using standard keyboard shortcuts such as Alt-tab. The default page on the home screen is the favorites page, and all of your other programs are organized in their own sections on the left-hand side.  But if you want to add one of these to your favorites page, simply right-click on it and select Add to Favorites. When you're done for the day, you can simply close your netbook to put it to sleep.  Or, if you want to shut down, just press the Quit button on the bottom right of the home screen and then select Shut Down. Booting Jolicloud When you install Jolicloud, it will set itself as the default operating system.  Now, when you boot your netbook, it will show you a list of installed operating systems.  You can select either Windows or Jolicloud, but if you don't make a selection it will boot into Jolicloud after waiting 10 seconds. If you'd prefer to boot into Windows by default, you can easily change this.  First, boot your netbook into Windows.  Open the start menu, right-click on the Computer button, and select Properties.   Click the "Advanced system settings" link on the left side. Click the Settings button in the Startup and Recovery section. Now, select Windows as the default operating system, and click OK.  Your netbook will now boot into Windows by default, but will give you 10 seconds to choose to boot into Jolicloud when you start your computer. Or, if you decide you don't want Jolicloud, you can easily uninstall it from within Windows. Please note that this will also remove any files you may have saved in Jolicloud, so be sure to copy them to your Windows drive before uninstalling. To uninstall Jolicloud from within Windows, open Control Panel, and select Uninstall a Program. Scroll down to select Jolicloud, and click Uninstall/Change. Click Yes to confirm that you want to uninstall Jolicloud. After a few moments, it will let you know that Jolicloud has been uninstalled.  Your netbook is now back the same as it was before you installed Jolicloud, with only Windows installed. Closing Whether you want to replace the current OS on your netbook or would simply like to try out a fresh new Linux version on your netbook, Jolicloud is a great option for you.
We were very impressed by its solid hardware support and the ease of installing new apps in Jolicloud.  Rather than simply giving us a standard OS, Jolicloud offers a unique way to use your netbook with native programs and webapps.  And whether you're an IT pro or a new computer user, Jolicloud was easy enough to use that anyone can do it.  Give it a try, and let us know what your favorite netbook OS is! Link Download Jolicloud for your netbook

    Read the article

  • Coding With Windows Azure IaaS

    - by Hisham El-bereky
This post will focus on some advanced programming topics concerned with IaaS (Infrastructure as a Service), which is provided as a Windows Azure virtual machine (with its related resources like virtual disks and virtual networks). Windows Azure started as a PaaS cloud platform, but because some business cases need full control over their virtual machines, Windows Azure moved toward providing IaaS as well. Sometimes you will need to manage your cloud IaaS through code, for example for these reasons: Working on a hyper-cloud system by providing a bursting connector to Windows Azure virtual machines Providing a multi-tenant system which consumes Windows Azure virtual machines Automating a process in your on-premises or cloud service which needs to utilize some virtual resources We are going to implement the following basic operations using C# code: List images Create virtual machine List virtual machines Restart virtual machine Delete virtual machine Before implementing the above operations we need to prepare the client side and the Windows Azure subscription to communicate correctly by providing a management certificate (an X.509 v3 certificate) which permits client access to resources in your Windows Azure subscription; requests made using the Windows Azure Service Management REST API require authentication against a certificate that you provide to Windows Azure. More info about setting up the management certificate is located here. To install the .cer on another client machine you will need the .pfx file; if it does not exist, you can create it by exporting the .cer as a .pfx. Note: You will need to install .NET 4.5 on your machine to try the code. So let's start. This post builds on the post by Michael Washam, "Advanced Windows Azure IaaS – Demo Code", so I'm here to clarify some points and to add new operations which do not exist in Michael's demo. The basic C# class used here as the client to the Azure REST API for the IaaS service is HttpClient (it provides a base class for sending HTTP requests and receiving HTTP responses from a resource identified by a URI); this object must be initialized with the required data like the certificate, headers and content if required.
Also I'd like to refer here that the code is based on using Asynchronous programming with calls to azure which enhance the performance and gives us the ability to work with complex calls which depends on more than one sub-call to achieve some operation The following code explain how to get certificate and initializing HttpClient object with required data like headers and content HttpClient GetHttpClient() { X509Store certificateStore = null; X509Certificate2 certificate = null; try { certificateStore = new X509Store(StoreName.My, StoreLocation.CurrentUser); certificateStore.Open(OpenFlags.ReadOnly); string thumbprint = ConfigurationManager.AppSettings["CertThumbprint"]; var certificates = certificateStore.Certificates.Find(X509FindType.FindByThumbprint, thumbprint, false); if (certificates.Count > 0) { certificate = certificates[0]; } } finally { if (certificateStore != null) certificateStore.Close(); }   WebRequestHandler handler = new WebRequestHandler(); if (certificate!= null) { handler.ClientCertificates.Add(certificate); HttpClient httpClient = new HttpClient(handler); //And to set required headers lik x-ms-version httpClient.DefaultRequestHeaders.Add("x-ms-version", "2012-03-01"); httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/xml")); return httpClient; } return null; }  Let us keep the object httpClient as reference object used to call windows azure REST API IaaS service. For each request operation we need to define: Request URI HTTP Method Headers Content body (1) List images The List OS Images operation retrieves a list of the OS images from the image repository Request URI https://management.core.windows.net/<subscription-id>/services/images] Replace <subscription-id> with your windows Id HTTP Method GET (HTTP 1.1) Headers x-ms-version: 2012-03-01 Body None.  C# Code List<String> imageList = new List<String>(); //replace _subscriptionid with your WA subscription String uri = String.Format("https://management.core.windows.net/{0}/services/images", _subscriptionid);  HttpClient http = GetHttpClient(); Stream responseStream = await http.GetStreamAsync(uri);  if (responseStream != null) {      XDocument xml = XDocument.Load(responseStream);      var images = xml.Root.Descendants(ns + "OSImage").Where(i => i.Element(ns + "OS").Value == "Windows");      foreach (var image in images)      {      string img = image.Element(ns + "Name").Value;      imageList.Add(img);      } } More information about the REST call (Request/Response) located here on this link http://msdn.microsoft.com/en-us/library/windowsazure/jj157191.aspx (2) Create Virtual Machine Creating virtual machine required service and deployment to be created first, so creating VM should be done through three steps incase hosted service and deployment is not created yet Create hosted service, a container for service deployments in Windows Azure. A subscription may have zero or more hosted services Create deployment, a service that is running on Windows Azure. A deployment may be running in either the staging or production deployment environment. It may be managed either by referencing its deployment ID, or by referencing the deployment environment in which it's running. Create virtual machine, the previous two steps info required here in this step I suggest here to use the same name for service, deployment and service to make it easy to manage virtual machines Note: A name for the hosted service that is unique within Windows Azure. 
This name is the DNS prefix name and can be used to access the hosted service. For example: http://ServiceName.cloudapp.net// 2.1 Create service Request URI https://management.core.windows.net/<subscription-id>/services/hostedservices HTTP Method POST (HTTP 1.1) Header x-ms-version: 2012-03-01 Content-Type: application/xml Body More details about request body (and other information) are located here http://msdn.microsoft.com/en-us/library/windowsazure/gg441304.aspx C# code The following method show how to create hosted service async public Task<String> NewAzureCloudService(String ServiceName, String Location, String AffinityGroup, String subscriptionid) { String requestID = String.Empty;   String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices", subscriptionid); HttpClient http = GetHttpClient();   System.Text.ASCIIEncoding ae = new System.Text.ASCIIEncoding(); byte[] svcNameBytes = ae.GetBytes(ServiceName);   String locationEl = String.Empty; String locationVal = String.Empty;   if (String.IsNullOrEmpty(Location) == false) { locationEl = "Location"; locationVal = Location; } else { locationEl = "AffinityGroup"; locationVal = AffinityGroup; }   XElement srcTree = new XElement("CreateHostedService", new XAttribute(XNamespace.Xmlns + "i", ns1), new XElement("ServiceName", ServiceName), new XElement("Label", Convert.ToBase64String(svcNameBytes)), new XElement(locationEl, locationVal) ); ApplyNamespace(srcTree, ns);   XDocument CSXML = new XDocument(srcTree); HttpContent content = new StringContent(CSXML.ToString()); content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/xml");   HttpResponseMessage responseMsg = await http.PostAsync(uri, content); if (responseMsg != null) { requestID = responseMsg.Headers.GetValues("x-ms-request-id").FirstOrDefault(); } return requestID; } 2.2 Create Deployment Request URI https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>/deploymentslots/<deployment-slot-name> <deployment-slot-name> with staging or production, depending on where you wish to deploy your service package <service-name> provided as input from the previous step HTTP Method POST (HTTP 1.1) Header x-ms-version: 2012-03-01 Content-Type: application/xml Body More details about request body (and other information) are located here http://msdn.microsoft.com/en-us/library/windowsazure/ee460813.aspx C# code The following method show how to create hosted service deployment async public Task<String> NewAzureVMDeployment(String ServiceName, String VMName, String VNETName, XDocument VMXML, XDocument DNSXML) { String requestID = String.Empty;     String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments", _subscriptionid, ServiceName); HttpClient http = GetHttpClient(); XElement srcTree = new XElement("Deployment", new XAttribute(XNamespace.Xmlns + "i", ns1), new XElement("Name", ServiceName), new XElement("DeploymentSlot", "Production"), new XElement("Label", ServiceName), new XElement("RoleList", null) );   if (String.IsNullOrEmpty(VNETName) == false) { srcTree.Add(new XElement("VirtualNetworkName", VNETName)); }   if(DNSXML != null) { srcTree.Add(new XElement("DNS", new XElement("DNSServers", DNSXML))); }   XDocument deploymentXML = new XDocument(srcTree); ApplyNamespace(srcTree, ns);   deploymentXML.Descendants(ns + "RoleList").FirstOrDefault().Add(VMXML.Root);     String fixedXML = deploymentXML.ToString().Replace(" xmlns=\"\"", ""); 
HttpContent content = new StringContent(fixedXML); content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/xml");   HttpResponseMessage responseMsg = await http.PostAsync(uri, content); if (responseMsg != null) { requestID = responseMsg.Headers.GetValues("x-ms-request-id").FirstOrDefault(); }   return requestID; } 2.3 Create Virtual Machine Request URI https://management.core.windows.net/<subscription-id>/services/hostedservices/<cloudservice-name>/deployments/<deployment-name>/roles <cloudservice-name> and <deployment-name> are provided as input from the previous steps Http Method POST (HTTP 1.1) Header x-ms-version: 2012-03-01 Content-Type: application/xml Body More details about request body (and other information) located here http://msdn.microsoft.com/en-us/library/windowsazure/jj157186.aspx C# code async public Task<String> NewAzureVM(String ServiceName, String VMName, XDocument VMXML) { String requestID = String.Empty;   String deployment = await GetAzureDeploymentName(ServiceName);   String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments/{2}/roles", _subscriptionid, ServiceName, deployment);   HttpClient http = GetHttpClient(); HttpContent content = new StringContent(VMXML.ToString()); content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/xml"); HttpResponseMessage responseMsg = await http.PostAsync(uri, content); if (responseMsg != null) { requestID = responseMsg.Headers.GetValues("x-ms-request-id").FirstOrDefault(); } return requestID; } (3) List Virtual Machines To list virtual machine hosted on windows azure subscription we have to loop over all hosted services to get its hosted virtual machines To do that we need to execute the following operations: listing hosted services listing hosted service Virtual machine 3.1 Listing Hosted Services Request URI https://management.core.windows.net/<subscription-id>/services/hostedservices HTTP Method GET (HTTP 1.1) Headers x-ms-version: 2012-03-01 Body None. More info about this HTTP request located here on this link http://msdn.microsoft.com/en-us/library/windowsazure/ee460781.aspx C# Code async private Task<List<XDocument>> GetAzureServices(String subscriptionid) { String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices ", subscriptionid); List<XDocument> services = new List<XDocument>();   HttpClient http = GetHttpClient();   Stream responseStream = await http.GetStreamAsync(uri);   if (responseStream != null) { XDocument xml = XDocument.Load(responseStream); var svcs = xml.Root.Descendants(ns + "HostedService"); foreach (XElement r in svcs) { XDocument vm = new XDocument(r); services.Add(vm); } }   return services; }  3.2 Listing Hosted Service Virtual Machines Request URI https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>/deployments/<deployment-name>/roles/<role-name> HTTP Method GET (HTTP 1.1) Headers x-ms-version: 2012-03-01 Body None. 
More info about this HTTP request here http://msdn.microsoft.com/en-us/library/windowsazure/jj157193.aspx C# Code async public Task<XDocument> GetAzureVM(String ServiceName, String VMName, String subscriptionid) { String deployment = await GetAzureDeploymentName(ServiceName); XDocument vmXML = new XDocument();   String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments/{2}/roles/{3}", subscriptionid, ServiceName, deployment, VMName);   HttpClient http = GetHttpClient(); Stream responseStream = await http.GetStreamAsync(uri); if (responseStream != null) { vmXML = XDocument.Load(responseStream); }   return vmXML; }  So the final method which can be used to list all virtual machines is: async public Task<XDocument> GetAzureVMs() { List<XDocument> services = await GetAzureServices(); XDocument vms = new XDocument(); vms.Add(new XElement("VirtualMachines")); ApplyNamespace(vms.Root, ns); foreach (var svc in services) { string ServiceName = svc.Root.Element(ns + "ServiceName").Value;   String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deploymentslots/{2}", _subscriptionid, ServiceName, "Production");   try { HttpClient http = GetHttpClient(); Stream responseStream = await http.GetStreamAsync(uri);   if (responseStream != null) { XDocument xml = XDocument.Load(responseStream); var roles = xml.Root.Descendants(ns + "RoleInstance"); foreach (XElement r in roles) { XElement svcnameel = new XElement("ServiceName", ServiceName); ApplyNamespace(svcnameel, ns); r.Add(svcnameel); // not part of the roleinstance vms.Root.Add(r); } } } catch (HttpRequestException http) { // no vms with cloud service } } return vms; }  (4) Restart Virtual Machine Request URI https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>/deployments/<deployment-name>/roles/<role-name>/Operations HTTP Method POST (HTTP 1.1) Headers x-ms-version: 2012-03-01 Content-Type: application/xml Body <RestartRoleOperation xmlns:i="http://www.w3.org/2001/XMLSchema-instance"> <OperationType>RestartRoleOperation</OperationType> </RestartRoleOperation>  More details about this http request here http://msdn.microsoft.com/en-us/library/windowsazure/jj157197.aspx  C# Code async public Task<String> RebootVM(String ServiceName, String RoleName) { String requestID = String.Empty;   String deployment = await GetAzureDeploymentName(ServiceName); String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments/{2}/roleInstances/{3}/Operations", _subscriptionid, ServiceName, deployment, RoleName);   HttpClient http = GetHttpClient();   XElement srcTree = new XElement("RestartRoleOperation", new XAttribute(XNamespace.Xmlns + "i", ns1), new XElement("OperationType", "RestartRoleOperation") ); ApplyNamespace(srcTree, ns);   XDocument CSXML = new XDocument(srcTree); HttpContent content = new StringContent(CSXML.ToString()); content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/xml");   HttpResponseMessage responseMsg = await http.PostAsync(uri, content); if (responseMsg != null) { requestID = responseMsg.Headers.GetValues("x-ms-request-id").FirstOrDefault(); } return requestID; }  (5) Delete Virtual Machine You can delete your hosted virtual machine by deleting its deployment, but I prefer to delete its hosted service also, so you can easily manage your virtual machines from code 5.1 Delete Deployment Request URI https://management.core.windows.net/< 
subscription-id >/services/hostedservices/< service-name >/deployments/<Deployment-Name> HTTP Method DELETE (HTTP 1.1) Headers x-ms-version: 2012-03-01 Body None. C# code async public Task<HttpResponseMessage> DeleteDeployment( string deploymentName) { string xml = string.Empty; String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments/{2}", _subscriptionid, deploymentName, deploymentName); HttpClient http = GetHttpClient(); HttpResponseMessage responseMessage = await http.DeleteAsync(uri); return responseMessage; }  5.2 Delete Hosted Service Request URI https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name> HTTP Method DELETE (HTTP 1.1) Headers x-ms-version: 2012-03-01 Body None. C# code async public Task<HttpResponseMessage> DeleteService(string serviceName) { string xml = string.Empty; String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}", _subscriptionid, serviceName); Log.Info("Windows Azure URI (http DELETE verb): " + uri, typeof(VMManager)); HttpClient http = GetHttpClient(); HttpResponseMessage responseMessage = await http.DeleteAsync(uri); return responseMessage; }  And the following is the method which can used to delete both of deployment and service async public Task<string> DeleteVM(string vmName) { string responseString = string.Empty;   // as a convention here in this post, a unified name used for service, deployment and VM instance to make it easy to manage VMs HttpClient http = GetHttpClient(); HttpResponseMessage responseMessage = await DeleteDeployment(vmName);   if (responseMessage != null) {   string requestID = responseMessage.Headers.GetValues("x-ms-request-id").FirstOrDefault(); OperationResult result = await PollGetOperationStatus(requestID, 5, 120); if (result.Status == OperationStatus.Succeeded) { responseString = result.Message; HttpResponseMessage sResponseMessage = await DeleteService(vmName); if (sResponseMessage != null) { OperationResult sResult = await PollGetOperationStatus(requestID, 5, 120); responseString += sResult.Message; } } else { responseString = result.Message; } } return responseString; }  Note: This article is subject to be updated Hisham  References Advanced Windows Azure IaaS – Demo Code Windows Azure Service Management REST API Reference Introduction to the Azure Platform Representational state transfer Asynchronous Programming with Async and Await (C# and Visual Basic) HttpClient Class
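The DeleteVM method above relies on a PollGetOperationStatus helper that this post references but never shows. Here is a minimal sketch of what such a helper could look like (my own reconstruction, not code from Michael's or this demo), assuming the standard Get Operation Status call of the Service Management REST API and simple OperationResult/OperationStatus types defined purely for this example; the timeout handling is also an assumption:

    // Sketch: polling the Service Management "Get Operation Status" call until it completes.
    // OperationStatus/OperationResult are illustrative types assumed by the DeleteVM sample above.
    public enum OperationStatus { InProgress, Succeeded, Failed }

    public class OperationResult
    {
        public OperationStatus Status { get; set; }
        public string Message { get; set; }
    }

    async public Task<OperationResult> PollGetOperationStatus(string requestId, int pollIntervalSeconds, int timeoutSeconds)
    {
        var result = new OperationResult { Status = OperationStatus.InProgress };
        String uri = String.Format("https://management.core.windows.net/{0}/operations/{1}", _subscriptionid, requestId);
        DateTime deadline = DateTime.UtcNow.AddSeconds(timeoutSeconds);

        HttpClient http = GetHttpClient();
        while (DateTime.UtcNow < deadline)
        {
            Stream responseStream = await http.GetStreamAsync(uri);
            XDocument operation = XDocument.Load(responseStream);
            string status = operation.Root.Element(ns + "Status").Value;   // InProgress, Succeeded or Failed

            if (status == "Succeeded")
            {
                result.Status = OperationStatus.Succeeded;
                result.Message = "Operation " + requestId + " succeeded";
                return result;
            }
            if (status == "Failed")
            {
                result.Status = OperationStatus.Failed;
                var error = operation.Root.Descendants(ns + "Message").FirstOrDefault();
                result.Message = error != null ? error.Value : "Operation " + requestId + " failed";
                return result;
            }

            await Task.Delay(TimeSpan.FromSeconds(pollIntervalSeconds));   // still InProgress, wait and retry
        }

        result.Message = "Timed out waiting for operation " + requestId;
        return result;
    }

This keeps the same conventions as the rest of the code above (the _subscriptionid field, the GetHttpClient helper and the ns XNamespace), and simply keeps re-reading the operation record until the Status element reports Succeeded or Failed, or the caller-supplied timeout expires.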

    Read the article

< Previous Page | 760 761 762 763 764 765 766 767 768 769 770 771  | Next Page >