Search Results

Search found 11836 results on 474 pages for 'cloud dev'.

  • java web application best practices

    - by Bruce
    Hi all. I'm trying to figure out the optimum way to develop and release a fairly simple web application, and I'm running into several problems. I'll outline the decisions I've made, because somewhere I've clearly gone off the rails. Hugely grateful for any help!

    I have what I think is a fairly simple web application. It contains a couple of JSPs that reference a couple of Java beans, and the usual static HTML, JS, CSS and images.

    Decision 1) I wanted a clear and clean release procedure, such that I could develop on my local machine and then release reliably to a production machine. I therefore decided to package the application into a war file (including all the static resources), to minimize the separate bits and pieces I would need to release. So far so good?

    Decision 2) I wanted things on my local machine to be as similar as possible to the production environment. So in my HTML, for example, I may have a reference to a static file such as http://static.foo.com/file . To keep this code working seamlessly on dev and prod, I decided to put static.foo.com in my /etc/hosts when developing locally, so that all the URLs work correctly without changing anything.

    Decision 3) I decided to use Eclipse and Maven to give me a best-practice environment for administering and building my project.

    So I have a nice tight setup now, except that every time I want to change anything in development, like one line in an HTML file, I have to rebuild the entire project and then wait for Tomcat to load the war before I can see if it's what I wanted. So my questions are:

    1) Is there a way to connect up Eclipse and Tomcat so that I don't have to rebuild the war each time? I.e. Tomcat looks straight at my actual workspace to serve up the static files?

    2) I think I'm maybe making things harder by using /etc/hosts to reflect production URLs. Is there a better way that doesn't involve manually changing over URLs? (Relative URLs are fine of course, but where you have many subdomains, say one for static files and one for dynamic, you have to write out the full path, surely?)

    3) Is this really best practice? How do people balance the requirement for an automated, all-encompassing build process on the one hand, and the speed and flexibility to develop JavaScript, HTML and CSS quickly on the other, as quickly as if one just pointed Apache at the directory and developed live? What do people find works?

    Many thanks!
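
    For question 1, Eclipse WTP's Tomcat integration has a "Serve modules without publishing" server option that serves resources straight from the workspace, which avoids the war rebuild for static-file edits. For question 2, one common alternative to the /etc/hosts trick is to externalize the static-host prefix into deploy-time configuration. A minimal sketch using a servlet context parameter; the parameter name static.base is an invented example, not a standard:

        <!-- web.xml: override this value per environment at deploy time -->
        <context-param>
            <param-name>static.base</param-name>
            <param-value>http://static.foo.com</param-value>
        </context-param>

        <%-- in a JSP, resolve the prefix through the EL implicit initParam map --%>
        <img src="${initParam['static.base']}/images/logo.png" alt="logo" />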

  • iPhone Debugger Message -- Weird

    - by Bill Shiff
    Hello, I have an iPhone app that I've been working on and have recently upgraded my version of Xcode. Since the upgrade, I can build and debug in the iPhone Simulator just fine, but when I try to debug on an attached device I get the following messages from Xcode 4:

        GNU gdb 6.3.50-20050815 (Apple version gdb-1510) (Fri Oct 22 04:12:10 UTC 2010)
        Copyright 2004 Free Software Foundation, Inc.
        GDB is free software, covered by the GNU General Public License, and you are
        welcome to change it and/or distribute copies of it under certain conditions.
        Type "show copying" to see the conditions.
        There is absolutely no warranty for GDB. Type "show warranty" for details.
        This GDB was configured as "--host=i386-apple-darwin --target=arm-apple-darwin".
        tty /dev/ttys001
        sharedlibrary apply-load-rules all
        warning: Unable to read symbols from "dyld" (prefix __dyld_) (not yet mapped into memory).
        warning: Unable to read symbols for (null)/Library/Frameworks/MessageUI.framework/MessageUI (file not found).
        warning: Unable to read symbols from "MessageUI" (not yet mapped into memory).
        warning: Unable to read symbols for (null)/Library/Frameworks/MapKit.framework/MapKit (file not found).
        warning: Unable to read symbols from "MapKit" (not yet mapped into memory).
        warning: Unable to read symbols from "Foundation" (not yet mapped into memory).
        warning: Unable to read symbols for (null)/Library/Frameworks/UIKit.framework/UIKit (file not found).
        warning: Unable to read symbols from "UIKit" (not yet mapped into memory).
        warning: Unable to read symbols for (null)/Library/Frameworks/CoreGraphics.framework/CoreGraphics (file not found).
        warning: Unable to read symbols from "CoreGraphics" (not yet mapped into memory).
        warning: Unable to read symbols from "CoreData" (not yet mapped into memory).
        warning: Unable to read symbols from "QuartzCore" (not yet mapped into memory).
        warning: Unable to read symbols from "libgcc_s.1.dylib" (not yet mapped into memory).
        warning: Unable to read symbols from "libSystem.B.dylib" (not yet mapped into memory).
        warning: Unable to read symbols from "libobjc.A.dylib" (not yet mapped into memory).
        warning: Unable to read symbols from "CoreFoundation" (not yet mapped into memory).
        target remote-mobile /tmp/.XcodeGDBRemote-3836-28
        Switching to remote-macosx protocol
        mem 0x1000 0x3fffffff cache
        mem 0x40000000 0xffffffff none
        mem 0x00000000 0x0fff none
        [Switching to thread 11523]
        [Switching to thread 11523]
        gdb stack crawl at point of internal error:
        0   gdb-arm-apple-darwin   0x0013216e internal_vproblem + 316

  • Thinking about introducing PHP/MySQL into a .NET/SQL Server environment. Thoughts?

    - by abszero
    I posted this over at reddit but it didn't gain any momentum. So here is what is going on: our company was recently purchased by another web shop, and I was promoted to head of development here in our office. Our office is completely .NET/SQL Server, and the company who purchased us is a *nix/PHP/MySQL shop.

    Now several of our large clients who are on the .NET platform are up for complete rewrites (the sites are from '04 and are running on the 1.x framework). While reviewing the proposal for one client with my superior, I came across a pretty extensive module which would require several hundred man-hours to complete, and voiced some concern about it in relation to the quote. One of the guys from the PHP group happened to hear this and told me of a module that they (the PHP group) use in Drupal that does exactly what the proposal in front of me was describing, and it only took, at most, 8 hours to completely set up / configure. My superior suggested that I take a look at Drupal and the module in question over the weekend, but stressed that we should only go that route if it really made sense.

    So this weekend I spun up a CentOS instance in VirtualBox and started playing around with Drupal. I am still fleshing it out, so I don't have a solid opinion on it just yet. Anyway, I have some questions / fears that I was hoping you could help me out with!

    1. Has anyone had experience doing this and, if so, how did it turn out?
    2. I am completely ignorant of what IDEs (if any) are available for PHP. The last time I worked with PHP it was in Notepad, and that was less than intuitive. So is there a more intuitive IDE out there for PHP dev?
    3. I don't want to scare my .NET guys. Since the merger, all of our new business clients with relatively small websites have gone on Drupal, with the larger sites going on .NET. My concern is that if they see a large site go onto Drupal they might start getting anxious and start handing out their resumes. For the foreseeable future there are no plans to liquidate the .NET platform, and really we can't, just from a support standpoint. What would be the best way to approach this?
    4. Any other helpful info?

    Thanks!

  • Can't get jQuery AutoComplete to work with External JSON

    - by rockinthesixstring
    I'm working on an ASP.NET app where I'm in need of jQuery AutoComplete. Currently nothing happens when I type data into the txt63 input box (and before you flame me for using a name like txt63, I know, I know... but it's not my call :D ).

    Here's my javascript code:

        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js" type="text/javascript"></script>
        <script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.1/jquery-ui.min.js" type="text/javascript"></script>
        <script src="http://jquery-ui.googlecode.com/svn/tags/latest/external/jquery.bgiframe-2.1.1.js" type="text/javascript"></script>
        <script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.1/i18n/jquery-ui-i18n.min.js" type="text/javascript"></script>
        <script language="javascript" type="text/javascript">
            var theSource = '../RegionsAutoComplete.axd?PID=<%= hidden62.value %>'
            $(function () {
                $('#<%= txt63.ClientID %>').autocomplete({
                    source: theSource,
                    minLength: 2,
                    select: function (event, ui) {
                        $('#<%= hidden63.ClientID %>').val(ui.item.id);
                    }
                });
            });
        </script>

    and here is my HTTP Handler:

        Namespace BT.Handlers
            Public Class RegionsAutoComplete : Implements IHttpHandler
                Public ReadOnly Property IsReusable() As Boolean Implements System.Web.IHttpHandler.IsReusable
                    Get
                        Return False
                    End Get
                End Property

                Public Sub ProcessRequest(ByVal context As System.Web.HttpContext) Implements System.Web.IHttpHandler.ProcessRequest
                    'the page contenttype is plain text
                    context.Response.ContentType = "application/json"
                    context.Response.ContentEncoding = Encoding.UTF8

                    'set page caching
                    context.Response.Cache.SetExpires(DateTime.Now.AddHours(24))
                    context.Response.Cache.SetCacheability(HttpCacheability.Public)
                    context.Response.Cache.SetSlidingExpiration(True)
                    context.Response.Cache.VaryByParams("PID") = True

                    Try
                        ' use the RegionsDataContext
                        Using RegionDC As New DAL.RegionsDataContext
                            ' query the database based on the querystring PID
                            Dim q = (From r In RegionDC.bt_Regions _
                                     Where r.PID = context.Request.QueryString("PID") _
                                     Select r.Region, r.ID)

                            ' now we loop through the array
                            ' and write out the results
                            Dim sb As New StringBuilder
                            sb.Append("{")
                            For Each item In q
                                sb.Append("""" & item.Region & """: """ & item.ID & """,")
                            Next
                            sb.Append("}")
                            context.Response.Write(sb.ToString)
                        End Using
                    Catch ex As Exception
                        HealthMonitor.Log(ex, False, "This error occurred while populating the autocomplete handler")
                    End Try
                End Sub
            End Class
        End Namespace

    The rest of my ASPX page has the appropriate controls, as I had this working with the old version of the jQuery library. I'm trying to get it working with the new one because I heard that the "dev" CDN was going to be obsolete. Any help or direction will be greatly appreciated.
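
    One thing worth checking here: jQuery UI 1.8's autocomplete expects a remote source to return a JSON array, either plain strings or objects with label/value keys, and it sends its own term parameter with each request. The handler above emits a single object with a trailing comma, which is not valid JSON, so the widget silently shows nothing. A minimal sketch of the shape the widget can consume (the id field is a custom extra that the select callback above reads via ui.item.id):

        [
            { "label": "Alberta", "value": "Alberta", "id": "1" },
            { "label": "British Columbia", "value": "British Columbia", "id": "2" }
        ]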

  • remove data layer and put it into its own domain

    - by user334768
    I have a SL4 application that uses EF4 & RIA Services; the DB is SQL 2008. All is working well. Now I want to put the database and web services on one domain (A.com), with the web service exposing the same methods available in my working project (the one listed at the top of this message), and then put a Silverlight application (the same one as above) on domain B.com and have it call the web services on A.com.

    I thought I had a fair understanding of RIA Services, enough to get the above application working. And when I say "working" I do mean on my local dev machine; I have yet to deploy an SL4 & .NET 4 application to my hosting site. But I don't think I understand it well enough. I normally create a new business app, add EF, then create the RIA DomainService, add any [Include]s I need, modify my LINQ queries and run the application. And it works.

    Now I need to break off my data layer and put it on one hosting site (A.com), and put my UI and business logic on another hosting site (B.com). I think I need to do the following:

    On the database & web service site, domain A.com: create the application, create EF4, create the RIA Services and deploy. At this time, are the exposed methods available as a "web service" to other applications calling the http:// a.com/serviceName.svc address?

    On the application site, domain B.com: create a business application (which will later need authentication and navigation). How can I create an EF model when I don't have access to the database? (I know I do have access, but I want to know what happens when I do not have access to the database, only to data provided by a web service.) And if I cannot create an EF model, how do I create my RIA Service?

    I hope anyone who takes the time to help me understands what I'm asking. Sorry so long.

  • Linker Issues with boost::thread under linux using Eclipse and CMake

    - by OcularProgrammer
    I'm in the process of attempting to port some code across from PC to Ubuntu, and am having some issues due to limited experience developing under Linux. We use CMake to generate all our build stuff. Under Windows I'm making VS2010 projects, and under Linux I'm making Eclipse projects. I've managed to get my OpenCV stuff ported across successfully, but am having major headaches trying to port my threaded boost apps.

    Just so we're clear, these are the steps I have followed so far on a clean Ubuntu 12 installation (I've done 2 clean re-installs to try and fix potential library cock-ups; now I'm just giving up and asking):

    1. Install Eclipse and Eclipse CDT using my package manager
    2. Install CMake and CMake Gui using my package manager
    3. Install libboost-all-dev using my package manager

    So far that's all I've done. I can create the Eclipse project using CMake with no errors, so CMake is successfully finding my boost install. It's when I try to build through Eclipse that I get issues. The app I'm attempting to build uses boost::asio for some UDP I/O and boost::thread to create worker threads for the asio I/O services. I can successfully compile each module, but when I come to link I get spammed with errors such as:

        /usr/bin/c++ CMakeFiles/RE05DevelopmentDemo.dir/main.cpp.o CMakeFiles/RE05DevelopmentDemo.dir/RE05FusionListener/RE05FusionListener.cpp.o CMakeFiles/RE05DevelopmentDemo.dir/NewEye/NewEye.cpp.o -o RE05DevelopmentDemo -rdynamic -Wl,-Bstatic -lboost_system-mt -lboost_date_time-mt -lboost_regex-mt -lboost_thread-mt -Wl,-Bdynamic
        /usr/lib/gcc/x86_64-linux-gnu/4.6/../../../../lib/libboost_thread-mt.a(thread.o): In function `void boost::call_once<void (*)()>(boost::once_flag&, void (*)()) [clone .constprop.98]':
        make[2]: Leaving directory `/home/david/Code/Build/Support/RE05DevDemo'
        (.text+0xc8): undefined reference to `pthread_key_create'
        /usr/lib/gcc/x86_64-linux-gnu/4.6/../../../../lib/libboost_thread-mt.a(thread.o): In function `boost::this_thread::interruption_enabled()':
        (.text+0x540): undefined reference to `pthread_getspecific'
        make[1]: Leaving directory `/home/david/Code/Build/Support/RE05DevDemo'
        /usr/lib/gcc/x86_64-linux-gnu/4.6/../../../../lib/libboost_thread-mt.a(thread.o): In function `boost::this_thread::disable_interruption::disable_interruption()':
        (.text+0x570): undefined reference to `pthread_getspecific'
        /usr/lib/gcc/x86_64-linux-gnu/4.6/../../../../lib/libboost_thread-mt.a(thread.o): In function `boost::this_thread::disable_interruption::disable_interruption()':
        (.text+0x59f): undefined reference to `pthread_getspecific'

    Some gotchas that I have collected from other StackOverflow posts and have already checked:

    - The boost libs are all present at /usr/lib.
    - I am not getting any compile errors for inability to find the boost headers, so they must be getting found.
    - I am trying to link statically, but I believe Eclipse should be passing the correct arguments to make that happen, since my CMakeLists.txt includes SET(Boost_USE_STATIC_LIBS ON).

    I'm officially out of ideas here. I have tried doing local builds of boost and a bunch of other stuff with no more success. I even re-installed Ubuntu to ensure I haven't completely fracked the libs directories and links with multiple weird versions or anything else. Any help would be much appreciated.
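
    Those undefined pthread_* references usually mean the pthread library is missing from, or placed too early in, the link line: with a static libboost_thread, -lpthread has to appear after the boost archives. A minimal CMake sketch of the usual fix, using the target name from the error output (the exact source list is illustrative):

        # CMakeLists.txt fragment: have CMake find and append the threads library
        find_package(Threads REQUIRED)
        add_executable(RE05DevelopmentDemo main.cpp)
        target_link_libraries(RE05DevelopmentDemo
            ${Boost_LIBRARIES}           # static boost archives first
            ${CMAKE_THREAD_LIBS_INIT})   # then -lpthread, so the archives can resolve against it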

  • WPF: Improving Performance for Running on Older PCs

    - by Phil Sandler
    So, I'm building a WPF app and did a test deployment today, and found that it performed pretty poorly. I was surprised, as we are really not doing much in the way of visual effects or animations.

    I deployed on two machines: the fastest and the slowest that will need to run the application (the slowest PC has an Intel Celeron 1.80GHz with 2GB RAM). The application ran pretty well on the faster machine, but was choppy on the slower machine. And when I say "choppy", I mean the cursor jumped even just passing it over any open window of the app that had focus. I opened the Task Manager Performance window, and could see that the CPU usage jumped whenever the app had focus and the cursor was moving over it. If I gave focus to another app (e.g. Excel), the CPU usage went back down after a second. This happened on both machines, but the choppiness was only noticeable on the slower machine. I had very limited time to tinker on the deployment machines, so I didn't do a lot of detailed testing.

    The app runs fine on my development machine, but I also see the CPU spiking up to 10% there, just running the cursor over the window. I downloaded the WPF performance tool from MS and have been tinkering with it (on my dev machine). The docs say this about the "Frame Rate" metric in the Perforator tool: "For applications without animation, this value should be near 0." The app is not doing any heavy animation, but the frame rate stays near 50 when the cursor is over any window. The screens I tested have column headers in a grid that "highlight" and buttons that change color and appearance when scrolled over. But even moving the mouse over blank areas of the windows causes the same frame rate and CPU usage, so it doesn't seem to be related to these minor animations.

    (Also, I am unable to figure out how to get anything but the two default tools, Perforator and Visual Profiler, installed into the WPF performance tool. That is probably a separate question.) I also have Redgate's profiling tool, but I'm not sure if that can shed any light on rendering performance.

    So, I realize this is not an easy thing to troubleshoot without specifics or sample code (which I can't post). My questions are:

    1. What are some general things to look for (or avoid) in the code to improve performance?
    2. What steps can I take using the WPF performance tool to narrow down the problem?
    3. Is the PC spec listed above (Intel Celeron 1.80GHz with 2GB RAM) too slow to be running even vanilla WPF applications?
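
    One quick thing to rule out on the slow machine is software rendering: WPF falls back to rendering on the CPU when the video card offers no usable acceleration, and that produces exactly this kind of CPU spike on simple mouse-over repaints. A small C# sketch using the real RenderCapability API (drop it into any window's code-behind; tier 0 means fully software-rendered):

        using System.Windows;
        using System.Windows.Media;

        // WPF packs the tier into the high 16 bits of RenderCapability.Tier.
        int tier = RenderCapability.Tier >> 16;
        // 0 = software only, 1 = partial hardware, 2 = full hardware acceleration
        MessageBox.Show("Rendering tier: " + tier);

    If the Celeron box reports tier 0, the frame-rate and CPU numbers follow from that rather than from anything in your XAML.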

  • Macports 1.8.2 fails to build db46 on OS X 10.6.3

    - by themoch
    I'm trying to put a dev environment on my Mac, and to do so I need to install several packages which require db46. When running sudo port install db46 I get the following error:

        --->  Computing dependencies for db46
        --->  Fetching db46
        --->  Attempting to fetch patch.4.6.21.1 from http://www.oracle.com/technology/products/berkeley-db/db/update/4.6.21/
        --->  Attempting to fetch patch.4.6.21.2 from http://www.oracle.com/technology/products/berkeley-db/db/update/4.6.21/
        --->  Attempting to fetch patch.4.6.21.3 from http://www.oracle.com/technology/products/berkeley-db/db/update/4.6.21/
        --->  Attempting to fetch patch.4.6.21.4 from http://www.oracle.com/technology/products/berkeley-db/db/update/4.6.21/
        --->  Attempting to fetch db-4.6.21.tar.gz from http://distfiles.macports.org/db4/4.6.21_6
        --->  Verifying checksum(s) for db46
        --->  Extracting db46
        --->  Applying patches to db46
        --->  Configuring db46
        --->  Building db46
        Error: Target org.macports.build returned: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_databases_db46/work/db-4.6.21/build_unix" && /usr/bin/make -j2 all " returned error 2
        Command output:
        ../dist/../libdb_java/db_java_wrap.c:9464: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'void'
        ../dist/../libdb_java/db_java_wrap.c:9487: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'jlong'
        ../dist/../libdb_java/db_java_wrap.c:9509: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'void'
        ../dist/../libdb_java/db_java_wrap.c:9532: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'jlong'
        ../dist/../libdb_java/db_java_wrap.c:9563: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'jint'
        ../dist/../libdb_java/db_java_wrap.c:9588: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'jlong'
        ../dist/../libdb_java/db_java_wrap.c:9613: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'jint'
        ../dist/../libdb_java/db_java_wrap.c:9638: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'void'
        ../dist/../libdb_java/db_java_wrap.c:9666: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'jlong'
        ../dist/../libdb_java/db_java_wrap.c:9691: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'jlong'
        ../dist/../libdb_java/db_java_wrap.c:9716: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'void'
        ../dist/../libdb_java/db_java_wrap.c:9739: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'void'
        ../dist/../libdb_java/db_java_wrap.c:9771: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'void'
        ../dist/../libdb_java/db_java_wrap.c:9796: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'void'
        ../dist/../libdb_java/db_java_wrap.c:9819: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'void'
        ../dist/../libdb_java/db_java_wrap.c:9842: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'void'
        ../dist/../libdb_java/db_java_wrap.c:9867: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'jobject'
        ../dist/../libdb_java/db_java_wrap.c:9899: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'void'
        ../dist/../libdb_java/db_java_wrap.c:9920: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'void'
        ../dist/../libdb_java/db_java_wrap.c:9943: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'void'
        ../dist/../libdb_java/db_java_wrap.c:9966: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'jstring'
        ../dist/../libdb_java/db_java_wrap.c:9991: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'jint'
        ../dist/../libdb_java/db_java_wrap.c:10010: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'void'
        ../dist/../libdb_java/db_java_wrap.c:10046: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'void'
        ../dist/../libdb_java/db_java_wrap.c:10071: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'void'
        make: *** [db_java_wrap.lo] Error 1
        make: *** Waiting for unfinished jobs....
        Note: Some input files use unchecked or unsafe operations.
        Note: Recompile with -Xlint:unchecked for details.
        cd ./classes && jar cf ../db.jar ./com/sleepycat
        Error: Status 1 encountered during processing.

    I have removed my /usr/local folder completely and it does not seem to help.
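
    Note that every failing line is in db_java_wrap.c: the port is choking on Berkeley DB's Java (JNI) bindings, not on the database itself, which typically points at a JDK-header mismatch on Snow Leopard rather than anything in /usr/local. Before anything drastic, it may be worth discarding the half-built state and seeing whether the port offers a variant that skips the Java bindings (these are standard MacPorts commands; whether such a variant exists depends on the port's version):

        # discard the failed work directory
        sudo port clean db46
        # list the variants this port supports (look for anything java-related)
        port variants db46
        # then retry the install
        sudo port install db46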

  • HTTP Handlers in Win2008/IIS7

    - by Keith Barrows
    We are migrating our web sites from Win2003/IIS6 to Win2008/IIS7. Our .NET code is in a WAP form with compiled binaries. I do dev work on a Win7/IIS7 box, so I had to learn early how to set up HTTP handlers in this newer environment. What I have, which has worked fine on my box, is:

        <system.webServer>
          <handlers>
            <remove name="WebServiceHandlerFactory-Integrated" />
            <remove name="ScriptHandlerFactory" />
            <remove name="ScriptHandlerFactoryAppServices" />
            <remove name="ScriptResource" />
            <add name="RivWorks" path="*.riv" verb="*" modules="IsapiModule" scriptProcessor="C:\Windows\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" preCondition="classicMode,runtimeVersionv2.0,bitness32" />
            <add name="RivWorks2" path="*.riv2" verb="*" modules="IsapiModule" scriptProcessor="C:\Windows\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" preCondition="classicMode,runtimeVersionv2.0,bitness32" />
            <add name="ScriptHandlerFactory" verb="*" path="*.asmx" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
            <add name="ScriptHandlerFactoryAppServices" verb="*" path="*_AppService.axd" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
            <add name="ScriptResource" verb="GET,HEAD" path="ScriptResource.axd" preCondition="integratedMode" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
          </handlers>
        </system.webServer>

    All I am getting on the new web site when I try to call into the *.riv handler is:

        404 - File or directory not found. The resource you are looking for might have been removed, had its name changed, or is temporarily unavailable.

    OK. You see interesting things when writing out these questions. Our server is set up in integrated mode and runs on an x64 system. So I changed the precondition clause to:

        preCondition="integratedMode,runtimeVersionv2.0,bitness64"

    Now I get this instead:

        500 - Internal server error. There is a problem with the resource you are looking for, and it cannot be displayed.

    Any ideas of what I should be doing, where I should be looking? TIA
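
    Two things in those mappings interact with the new server's setup. The classicMode precondition means the *.riv entries are ignored entirely in an Integrated-pipeline application pool, which matches the 404; and the handler points at the 32-bit aspnet_isapi.dll, so a bitness64 precondition loads a 32-bit ISAPI into a 64-bit worker process, which matches the 500. One approach worth trying is keeping the original bitness32/classicMode preconditions and instead running the site in a 32-bit, Classic-pipeline application pool. A sketch with the standard appcmd tool (substitute your real pool name for "RivWorksPool"):

        %windir%\system32\inetsrv\appcmd set apppool "RivWorksPool" /enable32BitAppOnWin64:true
        %windir%\system32\inetsrv\appcmd set apppool "RivWorksPool" /managedPipelineMode:Classic
        %windir%\system32\inetsrv\appcmd list apppool "RivWorksPool" /text:enable32BitAppOnWin64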

  • Generics vs Object performance

    - by Risho
    I'm doing practice problems from MCTS Exam 70-536, Microsoft .NET Framework Application Development Foundation. One of the problems is to create two classes, one generic and one object-based, that both perform the same thing: a loop uses the class and is iterated over thousands of times, and, using a timer, you time the performance of both. There was another post at "C# generics question" that asks the same question, but no one replied.

    Basically, if in my code I run the generic class first, it takes longer to process. If I run the object class first, then the object class takes longer to process. The whole idea was to prove that generics perform faster. I used the original poster's code to save myself some time. I didn't see anything particularly wrong with the code and was puzzled by the outcome. Can someone explain the unusual results? Thanks, Risho. Here is the code:

        class Program
        {
            class Object_Sample
            {
                public Object_Sample()
                {
                    Console.WriteLine("Object_Sample Class");
                }
                public long getTicks()
                {
                    return DateTime.Now.Ticks;
                }
                public void display(Object a)
                {
                    Console.WriteLine("{0}", a);
                }
            }

            class Generics_Samle<T>
            {
                public Generics_Samle()
                {
                    Console.WriteLine("Generics_Sample Class");
                }
                public long getTicks()
                {
                    return DateTime.Now.Ticks;
                }
                public void display(T a)
                {
                    Console.WriteLine("{0}", a);
                }
            }

            static void Main(string[] args)
            {
                long ticks_initial, ticks_final, diff_generics, diff_object;
                Object_Sample OS = new Object_Sample();
                Generics_Samle<int> GS = new Generics_Samle<int>();

                //Generic Sample
                ticks_initial = 0;
                ticks_final = 0;
                ticks_initial = GS.getTicks();
                for (int i = 0; i < 50000; i++)
                {
                    GS.display(i);
                }
                ticks_final = GS.getTicks();
                diff_generics = ticks_final - ticks_initial;

                //Object Sample
                ticks_initial = 0;
                ticks_final = 0;
                ticks_initial = OS.getTicks();
                for (int j = 0; j < 50000; j++)
                {
                    OS.display(j);
                }
                ticks_final = OS.getTicks();
                diff_object = ticks_final - ticks_initial;

                Console.WriteLine("\nPerformance of Generics {0}", diff_generics);
                Console.WriteLine("Performance of Object {0}", diff_object);
                Console.ReadKey();
            }
        }
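
    The measurement here is dominated by Console.WriteLine: 50,000 console writes cost orders of magnitude more than the boxing the test is trying to expose, and whichever loop runs first also pays the one-time JIT compilation cost, which is why the ordering flips the result. A sketch of how this is usually measured instead, with the I/O removed and Stopwatch (the high-resolution timer in System.Diagnostics) in place of DateTime.Now; the loop bodies stand in for the display(Object) and display(T) paths:

        using System;
        using System.Diagnostics;

        class Bench
        {
            static void Main()
            {
                const int N = 10000000;
                long sink = 0;

                // warm-up so neither timed loop pays first-run JIT costs
                for (int i = 0; i < 1000; i++) { object o = i; sink += (int)o; }

                Stopwatch sw = Stopwatch.StartNew();
                for (int i = 0; i < N; i++) { object o = i; sink += (int)o; }  // box + unbox, like display(Object)
                sw.Stop();
                Console.WriteLine("object : {0} ms", sw.ElapsedMilliseconds);

                sw.Reset();
                sw.Start();
                for (int i = 0; i < N; i++) { sink += i; }                     // no boxing, like display(T) with T = int
                sw.Stop();
                Console.WriteLine("generic: {0} ms", sw.ElapsedMilliseconds);

                Console.WriteLine(sink);  // keep the loops from being optimized away
            }
        }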

  • What are some things you'd like fresh college grads to know?

    - by bradhe
    So I proposed this to the Reddit community and I'd like to get SO's perspective on it. This is pretty much the copypasta of what I put there. I was thinking about this last night and thought it would be neat to compile a list. I'm still a pretty fresh college grad (been in industry for 2 years) but I think I might have a few interesting things to lend.

    - You don't know as much as you think you do. Somehow, college students think they know a lot more than they do (or maybe that was just me). Likewise, they think they can do more than they actually can. You should fairly assess your skills.
    - QA people are not out to get you. Humans introduce bugs to code. It's not (necessarily) a personal reflection on you and your skills if your code has a bug and it's caught by the QA/testing team.
    - Listen to your senior developers. They are not actually fuddy-duddies who don't know about the new l337 hax in Ruby (okay, sometimes they are, but still...). They have a wealth of knowledge that you can learn from, and it's in your best interest to do so.
    - You will most likely not be doing what you want to for a while. This is mostly true in the corporate world -- startups are a different matter. Also, this is due to more than just the economy, man! Junior devs need to earn their keep, so to speak. Everyone wants to be lead dev on the next project, and there are a lot of people in line ahead of you!
    - For every elite developer there are 100 average developers. Joel Spolsky, I'm looking at you. Somehow this concept of ninja coders has really ingrained itself in our culture. While I encourage you to be the best you can be, don't be disappointed if people aren't writing blog posts about you in the near future.

    Anyone else have anything they would see added to this list?

  • Can I have a workspace that is both a git workspace and a svn workspace?

    - by Troy
    I have checked out a local working copy of a codebase that lives in an svn repo. It's a big Java project that I use Eclipse to develop in. Eclipse of course builds everything on the fly, in its own way, with all the binaries ending up in [project root]/bin. That's perfectly fine with me for development, but when the build runs on the build server it looks quite a lot different (maven build, binaries end up in a different directory structure, etc).

    Sometimes I need to recreate the build server environment on my local development system to debug the build or what have you, so I usually end up downloading an entirely new working copy into a new workspace and running the build from there (this prevents cluttering my development workspace with all the build artifacts and dirtying up the working copy). Of course, sometimes I'm interested in running the full build on code that I don't want to check in yet, so I will manually copy the "development" workspace over the "build" workspace. Besides taking a lot of extra time copying a lot of files that I don't actually need (just overlaying the new over the old), this also screws up my svn metadata, meaning that I can't check in changes from that "build workspace" working copy, and I often end up having to re-download the code to get it back into a known state.

    So I'm thinking I make my svn working copy a local git repo, then "check out" the in-development code from the svn working copy / git master into the local build workspace. Then I can build, revert my changes, and have all the advantages of a version-controlled working copy in the build workspace. Then, if I need to make changes to the build, I push those back into the git master (which is also an svn working copy), then check them into the main svn repo.

        |-------------|          |---------------------|          |--------------------|
        |main svn repo| <------- | svn working copy    | <------- | non-svn-versioned  |
        |-------------|          | (svn dev workspace/ |          | build workspace    |
                                 |  git master)        |          | (git working copy) |
                                 |---------------------|          |--------------------|

    Just switching everything to git would obviously be better but: big company, too many people using svn, too costly to change everything, etc. We're stuck with svn as the main repo for now.

    BTW, I know there is a maven plugin for Eclipse and everything; I'm mainly interested to know whether there is a way to maintain a workspace that is both a git working copy and an svn working copy. Actually, any distributed version control system would probably work (hg possibly?). Advice? How does everybody else handle this situation of having to manage both a "development" build process and a "production" build process?
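
    This layered setup does work in practice: git and svn keep their metadata in separate directories (.git and .svn), so they can coexist in one tree as long as each tool is told to ignore the other's files. A bash sketch of the wiring, with illustrative paths:

        # inside the existing svn working copy: make it a git repo too
        cd ~/work/dev-workspace
        git init
        printf '.svn/\nbin/\n' >> .gitignore   # keep svn metadata and Eclipse output out of git
        svn propset svn:ignore '.git' .        # keep git metadata out of svn (overwrites any existing svn:ignore on ".")
        git add -A && git commit -m "baseline from svn"

        # create the disposable build workspace as a git clone
        git clone ~/work/dev-workspace ~/work/build-workspace

    Changes made in the build workspace can then be pushed or pulled back to the dev workspace and committed to svn from there, which matches the three-box diagram above.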

  • data from a few MySQL tables, sorted by time ASC

    - by Andrew
    In the database I've a few tables named aaa_9xxx, aaa_9yyy, aaa_9zzz. I want to find all data with a specified DATE and show it with the TIME ASC. First, I must find the tables in the database:

        $STH_1a = $DBH->query("SELECT table_name FROM information_schema.tables WHERE table_name LIKE 'aaa\_9%' ");
        foreach($STH_1a as $row) {
            $table_name_s1[] = $row['table_name'];
        }

    Second, I must find the data with a concrete date and show it with TIME ASC:

        foreach($table_name_s1 as $table_name_1) {
            $STH_1a2 = $DBH->query("SELECT * FROM `$table_name_1` WHERE date = '2011-11-11' ORDER BY time ASC ");
            while ($row = $STH_1a2->fetch(PDO::FETCH_ASSOC)) {
                echo " ".$table_name_1."-".$row['time']."-".$row['ei_name']." <br>";
            }
        }

    ...but this shows the data sorted by table name first, then by TIME ASC. I need all this data (from all tables) sorted by TIME ASC.

    Thank you dev-null-dweller, Andrew Stubbs and Jaison Erick for your help. I tested the Erick solution:

        foreach($STH_1a as $row) {
            $stmts[] = sprintf('SELECT * FROM %s WHERE date="%s"', $row['table_name'], '2011-11-11');
        }
        $stmt = implode("\nUNION\n", $stmts);
        $stmt .= "\nORDER BY time ASC";
        $STH_1a2 = $DBH->query($stmt);
        while ($row_1a2 = $STH_1a2->fetch(PDO::FETCH_ASSOC)) {
            echo " ".$row['table_name']."-".$row_1a2['time']."-".$row_1a2['ei_name']." <br>";
        }

    It's working, but I had a problem with 'table_name': it was always the LAST table name.

    And the final solution with all fixes -- thanks all for your help :))

        foreach($STH_1a as $row) {
            $stmts[] = sprintf("SELECT *, '%s' AS table_name FROM %s WHERE date='%s'",
                               $row['table_name'], $row['table_name'], '2011-11-11');
        }
        $stmt = implode("\nUNION\n", $stmts);
        $stmt .= "\nORDER BY time ASC";
        $STH_1a2 = $DBH->query($stmt);
        while ($row_1a2 = $STH_1a2->fetch(PDO::FETCH_ASSOC)) {
            echo " ".$row_1a2['table_name']."-".$row_1a2['time']."-".$row_1a2['ei_name']." <br>";
        }
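
    A small note on the query this builds: plain UNION deduplicates rows, which costs an extra sort and can silently drop identical rows coming from different tables, whereas UNION ALL keeps everything. The generated SQL amounts to this shape (table names and date taken from the example above):

        SELECT *, 'aaa_9xxx' AS table_name FROM aaa_9xxx WHERE date = '2011-11-11'
        UNION ALL
        SELECT *, 'aaa_9yyy' AS table_name FROM aaa_9yyy WHERE date = '2011-11-11'
        ORDER BY time ASC;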

  • Binding on a port with netpipes/netcat

    - by mindas
    I am trying to write a simple bash script that listens on a port and responds with a trivial HTTP response. My specific issue is that I am not sure whether the port is available, and in case of bind failure I fall back to the next port until a bind succeeds. So far the easiest way for me to achieve this was something like:

        for (( i=$PORT_BASE; i < $(($PORT_BASE+$PORT_RANGE)); i++ ))
        do
            if [ $DEBUG -eq 1 ] ; then
                echo trying to bind on $i
            fi
            /usr/bin/faucet $i --out --daemon echo test 2>/dev/null
            if [ $? -eq 0 ] ; then
                # success?
                port=$i
                if [ $DEBUG -eq 1 ] ; then
                    echo "bound on port $port"
                fi
                break
            fi
        done

    Here I am using faucet from the netpipes Ubuntu package. The problem with this is that if I simply print "test" to the output, curl complains about a non-standard HTTP response (error code 18). That's fair enough, as I don't print an HTTP-compatible response. If I replace echo test with echo -ne "HTTP/1.0 200 OK\r\n\r\ntest", curl still complains:

        user@server:$ faucet 10020 --out --daemon echo -ne "HTTP/1.0 200 OK\r\n\r\ntest"
        ...
        user@client:$ curl ip.of.the.server:10020
        curl: (56) Failure when receiving data from the peer

    I think the problem lies in how faucet is printing the response and handling the connection. For example, if I do the server side with netcat, curl works fine:

        user@server:$ echo -ne "HTTP/1.0 200 OK\r\n\r\ntest\r\n" | nc -l 10020
        ...
        user@client:$ curl ip.of.the.server:10020
        test
        user@client:$

    I would be more than happy to replace faucet with netcat in my main script, but the problem is that I want to spawn an independent server process so I can run the client from the same shell. faucet has a very handy --daemon parameter, as it forks to the background and I can use $? (the exit status code) to check whether the bind succeeded. If I were to use netcat for a similar purpose, I would have to fork it using & and $? would not work.

    Does anybody know why faucet isn't responding correctly in this particular case, and/or can suggest a solution to this problem? I am not married to either faucet or netcat, but I would like the solution to be implemented using bash or its utilities (as opposed to writing something in yet another scripting language, such as Perl or Python).
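
    One bash-only way to keep netcat's working HTTP behaviour and still detect occupied ports is to probe each port before forking the listener. A sketch, assuming your nc supports -z (the zero-I/O port-scan flag found in both the traditional and OpenBSD netcat variants shipped on Ubuntu):

        for (( port=PORT_BASE; port < PORT_BASE+PORT_RANGE; port++ )); do
            # -z: just test whether something already listens on this port
            if ! nc -z 127.0.0.1 "$port" 2>/dev/null; then
                # port looks free: fork an independent one-shot HTTP responder
                echo -ne "HTTP/1.0 200 OK\r\n\r\ntest\r\n" | nc -l "$port" &
                echo "serving on port $port (pid $!)"
                break
            fi
        done

    Note there is an inherent race between the probe and the actual bind (which faucet's --daemon exit status avoids), so this is a pragmatic workaround rather than a watertight one.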

  • Download a file using cocoa

    - by dododedodonl
    Hi All, I want to download a file to the downloads folder. I searched Google for this and found the NSURLDownload class. I've read the page in the dev center and created this code (with some copying and pasting):

        @implementation Downloader

        @synthesize downloadResponse;

        - (void)startDownloadingURL:(NSString*)downloadUrl destenation:(NSString*)destenation
        {
            // create the request
            NSURLRequest *theRequest=[NSURLRequest requestWithURL:[NSURL URLWithString:downloadUrl]
                                                      cachePolicy:NSURLRequestUseProtocolCachePolicy
                                                  timeoutInterval:60.0];
            // create the connection with the request and start loading the data
            NSURLDownload *theDownload=[[NSURLDownload alloc] initWithRequest:theRequest delegate:self];
            if (!theDownload) {
                NSLog(@"Download could not be made...");
            }
        }

        - (void)download:(NSURLDownload *)download decideDestinationWithSuggestedFilename:(NSString *)filename
        {
            NSString *destinationFilename;
            NSString *homeDirectory=NSHomeDirectory();
            destinationFilename=[[homeDirectory stringByAppendingPathComponent:@"Desktop"]
                                     stringByAppendingPathComponent:filename];
            [download setDestination:destinationFilename allowOverwrite:NO];
        }

        - (void)download:(NSURLDownload *)download didFailWithError:(NSError *)error
        {
            // release the connection
            [download release];
            // inform the user
            NSLog(@"Download failed! Error - %@ %@",
                  [error localizedDescription],
                  [[error userInfo] objectForKey:NSErrorFailingURLStringKey]);
        }

        - (void)downloadDidFinish:(NSURLDownload *)download
        {
            // release the connection
            [download release];
            // do something with the data
            NSLog(@"downloadDidFinish");
        }

        - (void)setDownloadResponse:(NSURLResponse *)aDownloadResponse
        {
            [aDownloadResponse retain];
            [downloadResponse release];
            downloadResponse = aDownloadResponse;
        }

        - (void)download:(NSURLDownload *)download didReceiveResponse:(NSURLResponse *)response
        {
            // reset the progress, this might be called multiple times
            bytesReceived = 0;
            // retain the response to use later
            [self setDownloadResponse:response];
        }

        - (void)download:(NSURLDownload *)download didReceiveDataOfLength:(unsigned)length
        {
            long long expectedLength = [[self downloadResponse] expectedContentLength];
            bytesReceived = bytesReceived+length;
            if (expectedLength != NSURLResponseUnknownLength) {
                percentComplete = (bytesReceived/(float)expectedLength)*100.0;
                NSLog(@"Percent - %f",percentComplete);
            } else {
                NSLog(@"Bytes received - %d",bytesReceived);
            }
        }

        -(NSURLRequest *)download:(NSURLDownload *)download willSendRequest:(NSURLRequest *)request redirectResponse:(NSURLResponse *)redirectResponse
        {
            NSURLRequest *newRequest=request;
            if (redirectResponse) {
                newRequest=nil;
            }
            return newRequest;
        }

        @end

    But my problem now is that the file doesn't even appear on the desktop as specified. And I want to put it in Downloads, not on the Desktop... What do I have to do?
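
    For the Downloads folder specifically, the destination delegate can resolve the user's real Downloads directory instead of building a path by hand. NSSearchPathForDirectoriesInDomains with NSDownloadsDirectory has been available since Mac OS X 10.5; a sketch of the changed delegate method:

        - (void)download:(NSURLDownload *)download decideDestinationWithSuggestedFilename:(NSString *)filename
        {
            // ~/Downloads, located the supported way rather than by string concatenation
            NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDownloadsDirectory, NSUserDomainMask, YES);
            NSString *downloadsDir = [paths objectAtIndex:0];
            [download setDestination:[downloadsDir stringByAppendingPathComponent:filename]
                      allowOverwrite:NO];
        }

    One other thing worth checking for the "nothing appears at all" symptom: the willSendRequest: delegate as written returns nil whenever a redirect response arrives, which silently cancels the download. If the URL you're fetching redirects even once, no file will ever be written.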

  • Windows Machine Debugging

    - by PrettyFlower
    I've been learning how to program for Windows for some time now and am getting pretty comfy with COM. I had thought to go over to Linux and do some C++ programming there, and I wished to run Rosetta Commons, so I installed Fedora. I had tried installing Ubuntu a few months ago and things got messy. I had a glitch, maybe caused by one of the live CD creators, my video card, or something else -- I don't know. WhoCrashed suggested it was my video card, and I had regular messages about ntfs.sys and page file issues. At any rate, I just installed Fedora and the same thing is happening again.

    I would like to think that, with twenty-five years of doing this, I might finally make some headway into debugging my system. I think I may have overlooked a lot of what could be done, in favor of simply uninstalling, reinstalling, formatting and starting from scratch. Just before I was going to clean-sweep again, I quite accidentally opened up the Windows debugging tools folder and found KD and WinDbg. I had never seen these before, and I felt that maybe I should look into this.

    I am quite familiar with the modern machine known as the computer; I know what a kernel is and am now pretty familiar with, at the very least, Windows operating system services. I wish to begin tracking my own machine's errors. I understand that most kernel debugging is done on a second machine, but I don't have one. I also understand that the goal of the debugger seems to be less about run-of-the-mill errors and more about development-time strategies, but I'm sure there is more to this.

    This is my first go at this, and I thought maybe I could get some suggestions on where to go from here. I would really like to learn ways to fix my machine and also maybe pick up some tricks on the dev side as well. I hope this isn't too broad a question or too generalized. I'm really just looking for the keywords and an overview of the more routine strategies used. thx
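
    A concrete first step with the tools you found: point WinDbg at one of the crash dumps Windows already wrote (typically under C:\Windows\Minidump or C:\Windows\MEMORY.DMP), set the symbol path, and ask for an automated analysis. Open the dump via File > Open Crash Dump (or windbg -z <dumpfile>), then run these standard WinDbg commands -- no second machine needed for post-mortem dumps:

        $$ use Microsoft's public symbol server, caching locally under C:\symbols
        .symfix C:\symbols
        .reload
        $$ heuristic crash analysis; names the likely faulting module and bugcheck
        !analyze -v

    For the ntfs.sys / video-driver suspicions you describe, !analyze -v will usually name the driver sitting on the failing stack, which is a much firmer lead than the blue-screen text alone.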

  • How can I obtain the IPv4 address of the client?

    - by Dr Dork
    Hello! I'm prepping for a simple work project and am trying to familiarize myself with the basics of socket programming in a Unix dev environment. At this point, I have some basic server-side code set up to listen for incoming TCP connection requests from clients, after the parent socket has been created and is set to listen...

        int sockfd, newfd;
        unsigned int len;
        socklen_t sin_size;
        char msg[]="Test message sent";
        char buf[MAXLEN];
        int st, rv;
        struct addrinfo hints, *serverinfo, *p;
        struct sockaddr_storage client;
        char ip[INET6_ADDRSTRLEN];
        .
        . // parent socket creation and listen code omitted for simplicity
        .
        //wait for connection requests from clients
        while(1)
        {
            //Returns the socketID and address of client connecting to socket
            if( ( newfd = accept(sockfd, (struct sockaddr *)&client, &len) ) == -1 ){
                perror("Accept");
                exit(-1);
            }

            if( (rv = recv(newfd, buf, MAXLEN-1, 0 )) == -1) {
                perror("Recv");
                exit(-1);
            }

            struct sockaddr_in *clientAddr = ( struct sockaddr_in *) get_in_addr((struct sockaddr *)&client);
            inet_ntop(client.ss_family, clientAddr, ip, sizeof ip);
            printf("Receive from %s: query type is %s\n", ip, buf);

            if( ( st = send(newfd, msg, strlen(msg), 0)) == -1 ) {
                perror("Send");
                exit(-1);
            }

            //ntohs is used to avoid big-endian and little endian compatibility issues
            printf("Send %d byte to port %d\n", ntohs(clientAddr->sin_port) );
            close(newfd);
        }

    I found the get_in_addr function online and placed it at the top of my code, and I use it to obtain the IP address of the connecting client...

        // get sockaddr, IPv4 or IPv6:
        void *get_in_addr(struct sockaddr *sa)
        {
            if (sa->sa_family == AF_INET) {
                return &(((struct sockaddr_in*)sa)->sin_addr);
            }
            return &(((struct sockaddr_in6*)sa)->sin6_addr);
        }

    ...but the function always returns the IPv6 address, since that's what the sa_family property is set to. My question is: is the IPv4 address stored anywhere in the data I'm using and, if so, how can I access it? Thanks so much in advance for all your help!
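
    When a dual-stack IPv6 listener accepts an IPv4 client, the kernel hands back an IPv6 sockaddr holding a v4-mapped address (::ffff:a.b.c.d): the IPv4 address is embedded in the last four bytes of the IPv6 address. A sketch of unwrapping it, using the standard IN6_IS_ADDR_V4MAPPED macro from <netinet/in.h>:

        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <stdio.h>
        #include <string.h>

        /* print the peer address, unwrapping v4-mapped IPv6 addresses */
        static void print_client_ip(struct sockaddr_storage *client)
        {
            char ip[INET6_ADDRSTRLEN];
            if (client->ss_family == AF_INET6) {
                struct sockaddr_in6 *a6 = (struct sockaddr_in6 *)client;
                if (IN6_IS_ADDR_V4MAPPED(&a6->sin6_addr)) {
                    /* the last 4 bytes of the mapped address are the IPv4 address */
                    struct in_addr v4;
                    memcpy(&v4, &a6->sin6_addr.s6_addr[12], 4);
                    inet_ntop(AF_INET, &v4, ip, sizeof ip);
                } else {
                    inet_ntop(AF_INET6, &a6->sin6_addr, ip, sizeof ip);
                }
            } else {
                inet_ntop(AF_INET, &((struct sockaddr_in *)client)->sin_addr, ip, sizeof ip);
            }
            printf("client: %s\n", ip);
        }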

  • rake db:create not working for legacy rails app (2.3.5) using MySQL (5.5.28)

    - by ridicter
    I'm a new Rails developer, and I'm working on a legacy Rails app. Whenever I run the rake db:create command, I get an error that the database couldn't be created. I have found many StackOverflow questions related to this, but in troubleshooting nearly all permutations of the solutions, I couldn't resolve the issue. I created the three DBs (dev, prod, test), created the user with all access privileges to these DBs, and ran rake db:create. I'm running Mac OS X Lion, MySQL 5.5.28, Rails 2.3.5, Ruby 1.8.7.

    Here are my settings:

        development:
          adapter: mysql
          encoding: utf8
          database: adva_development
          username: adva
          password: ****
          host: localhost
          socket: /tmp/mysql.sock

    Here's the error:

        Couldn't create database for {"adapter"=>"mysql", "username"=>"adva", "host"=>"localhost", "encoding"=>"utf8", "database"=>"adva_development", "socket"=>"/tmp/mysql.sock", "password"=>"****"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)

    I have done the following troubleshooting:

    1. Verified the user and password are correct, and that the user has access to the DB. (Double-checked user access with SELECT * FROM mysql.db WHERE Db = 'adva_development' \G; the user has all privileges.)
    2. Verified the socket is correct. I don't really understand sockets, but I can plainly see it at /tmp/mysql.sock.
    3. Checked the collation and character set. I found out I had created the DBs with a latin charset and collation, so I recreated them. I ran show variables like "collation_database"; and show variables like "character_set_database"; and they came back with utf8 and utf8_unicode_ci respectively.
    4. Followed the instructions in this question. After uninstalling the mysql gem, I ran the following, but came up with the same error:

        gem install --no-rdoc --no-ri mysql -- --with-mysql-dir=/usr/local/mysql-5.5.28-osx10.6-x86_64/bin --with-mysql-config=/usr/local/mysql-5.5.28-osx10.6-x86_64/bin/mysql_config

    Following Matt's suggestion, here's what rake --trace db:create reveals:

        ** Invoke db:create (first_time)
        ** Invoke db:load_config (first_time)
        ** Invoke rails_env (first_time)
        ** Execute rails_env
        ** Execute db:load_config
        ** Execute db:create
        Couldn't create database for {"database"=>"adva_development", "adapter"=>"mysql", "host"=>"127.0.0.1", "password"=>"woof2adva", "username"=>"adva", "encoding"=>"utf8"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)

    After 3 days and six or seven hours, I have pretty much run out of options. I tried various random things, like replacing localhost with 127.0.0.1, to no avail. Could there be something wrong related to my specific environment -- Mac OS X Lion + MySQL 5.5.28? I plan on trying to set everything up in a Linux environment. Thanks!
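
    One sanity check worth running before anything else: reproduce exactly what Rails is doing from the mysql command-line client, with the same credentials and host as database.yml (the database and user names below are the ones from the question):

        # can this user connect the way Rails does?
        mysql -u adva -p -h 127.0.0.1
        # and can it create the database rake is trying to create?
        mysql> CREATE DATABASE IF NOT EXISTS adva_development
               DEFAULT CHARACTER SET utf8 COLLATE utf8_unicode_ci;

    If the manual CREATE DATABASE succeeds, the rake error is likely cosmetic (older Rails prints this same message when the database already exists); if it fails, the raw MySQL error message will be far more specific than the Rails one.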

  • How can I obtain the IP address of my server program?

    - by Dr Dork
    Hello! This question is related to another question I just posted. I'm prepping for a simple work project and am trying to familiarize myself with the basics of socket programming in a Unix dev environment. At this point, I have some basic server-side and client-side code set up to communicate. Currently, my client code successfully connects to the server code, the server code sends it a test message, and then both quit out. Perfect! That's exactly what I wanted to accomplish. Now I'm playing around with the functions used to obtain info about the two environments (server and client). I'd like to obtain my server program's IP address. Here's the code I currently have to do this, but it's not working...

        int sockfd;
        unsigned int len;
        socklen_t sin_size;
        char msg[]="test message";
        char buf[MAXLEN];
        int st, rv;
        struct addrinfo hints, *serverinfo, *p;
        struct sockaddr_storage client;
        char s[INET6_ADDRSTRLEN];
        char ip[INET6_ADDRSTRLEN];

        //zero struct
        memset(&hints,0,sizeof(hints));
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;
        hints.ai_flags = AI_PASSIVE;

        //get the server info
        if((rv = getaddrinfo(NULL, SERVERPORT, &hints, &serverinfo ) != 0)){
            perror("getaddrinfo");
            exit(-1);
        }

        // loop through all the results and bind to the first we can
        for( p = serverinfo; p != NULL; p = p->ai_next)
        {
            //Setup the socket
            if( (sockfd = socket( p->ai_family, p->ai_socktype, p->ai_protocol )) == -1 ) {
                perror("socket");
                continue;
            }

            //Associate a socket id with an address to which other processes can connect
            if(bind(sockfd, p->ai_addr, p->ai_addrlen) == -1){
                close(sockfd);
                perror("bind");
                continue;
            }

            break;
        }

        if( p == NULL ){
            perror("Fail to bind");
        }

        inet_ntop(p->ai_family, get_in_addr((struct sockaddr *)p->ai_addr), s, sizeof(s));
        printf("Server has TCP Port %s and IP Address %s\n", SERVERPORT, s);

    and the output for the IP is always empty...

        server has TCP Port 21412 and IP Address ::

    Any ideas for what I'm missing? Thanks in advance for your help! This stuff is really complicated at first.
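
    The address isn't actually empty: "::" is the IPv6 wildcard address. Passing a NULL host with AI_PASSIVE to getaddrinfo deliberately yields the "any" address (:: or 0.0.0.0), which is what a listening socket binds to; the socket has no single specific IP. To list the machine's actual addresses, you can walk the network interfaces instead. A sketch using the standard getifaddrs(3):

        #include <ifaddrs.h>
        #include <netdb.h>
        #include <stdio.h>

        /* print every configured interface address in numeric form */
        int main(void)
        {
            struct ifaddrs *ifap, *ifa;
            char host[NI_MAXHOST];

            if (getifaddrs(&ifap) == -1) { perror("getifaddrs"); return 1; }
            for (ifa = ifap; ifa != NULL; ifa = ifa->ifa_next) {
                if (ifa->ifa_addr == NULL) continue;
                int fam = ifa->ifa_addr->sa_family;
                if (fam != AF_INET && fam != AF_INET6) continue;
                if (getnameinfo(ifa->ifa_addr,
                                fam == AF_INET ? sizeof(struct sockaddr_in)
                                               : sizeof(struct sockaddr_in6),
                                host, sizeof host, NULL, 0, NI_NUMERICHOST) == 0)
                    printf("%-8s %s\n", ifa->ifa_name, host);
            }
            freeifaddrs(ifap);
            return 0;
        }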

  • A versioning workflow for multiple similar (but not identical) deployments

    - by rs77
    I'm currently employed at a small non-tech organisation and have been given the role of coding the organisation's website. While I have enjoyed the task and have learnt much about web dev, I've encountered a few issues that I'm hoping someone will be able to help me with, or at least point me in the right direction on.

    A little background: The site I work on has subdomains that each have their own separate WordPress installation, as this has been the easiest "backend" admin panel for the type of user who will be responsible for updating content (etc). Within the organisation I work under the Marketing Manager (MM), and I code according to his style guide and wireframes. While we were working with only one subdomain since the beginning of the year, the project was relatively simple and straightforward. However, lately the workflow is becoming more complicated, as our original subdomain has been copied over to the other subdomains. Each of the new subdomains receives minor edits to its stylesheets (eg. different background pictures, slightly different colours here and there, etc).

    The issue: At the moment managing all the different subdomains has been "bearable", but the straw that's breaking the camel's back is the slight reversions the MM requires now that the CEO has seen the final product. The problem I'm having with reversions in stylesheets is that the CEO will one week state that he likes change "X", and then, as the MM and I continue to modify the site (to now "Z"), will another week state that he wants us to change "X" to "W" while keeping most of the changes made in "Y".

    What I'm looking for is something that allows me to:

    - track file changes
    - revert changes made (or revert back to 'a' from 'e' but including changes 'b' & 'c')
    - easily upload the necessary files to their respective WP-theme installations

    Does anything out there come close to addressing these issues? If so, what? Thanks for any help!

    PS - I'm learning Git at the moment, and it seems to handle the "tracking file changes" part quite nicely. I haven't learnt about the reverting-changes bit yet, though. Maybe for my final point I'm thinking of creating a shell script to automatically upload the files to their folders. Does Git do this too, though?

    Addendum (alexbbrown): I had a similar problem: I ran a custom version of MediaWiki where I installed various extensions in the versioned core (with svn). Each of the extensions required a section in the config file, but the config file also needed local configuration for each of several deployments. I could have implemented it using includes, but they would not be versioned; and rebasing branches each time is a chore. +50 experience points for a good answer in git.
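
    Git's selective revert covers the middle requirement directly: each change is a commit, and git revert creates a new commit that undoes one specific earlier commit while leaving later ones in place. A sketch (the commit ids are placeholders):

        git log --oneline    # suppose this shows: e5 "change e", d4 "change d", c3, b2, a1
        git revert e5 d4     # undo d and e; changes b and c stay applied

    For the per-subdomain variations, one common shape is a branch per subdomain carrying only that site's stylesheet tweaks on top of a shared master branch; deployment can then be a small shell script that checks out a branch and copies (or rsyncs) the theme folder to that subdomain's WordPress install, which would cover the third requirement.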

  • IE dynamic image caching issue?

    - by rdevitt
    I have an HTML page that loads multiple iframes, into which are embedded dynamic images created by a Tomcat server page (.jsp). This works as expected in Chrome and Firefox, but for some reason IE displays all of the images the same (as the first image). I've created an example: http://coupondiscounts.com/dev/jsImageTest.html

    jsImageTest.html - This page simply loads 6 instances of the testImageFrame.html page in separate iframes, one at a time, using JavaScript.

    testImageFrame.html - This is the page loaded in all the iframes. It contains only a JavaScript block that writes out the current time and an img tag. The img is dynamically generated by a .jsp page on a different server. It should be a white box on a black background. In the box are the current time (from the Tomcat server, using Java) and a randomly created double between 0 & 1.

    What happens (in IE): The page almost instantly loads four identical iframes. Depending on the speed of your machine, the JavaScript times may vary by a second or two. The images' times will all be the same, as will the random number. This holds true even for the last two iframes, which are loaded 5 and 10 seconds after the others (using JavaScript setTimeout()).

    What should happen (as it does in Chrome and FF): The page loads the same 4 iframes, but the random numbers in the images will be different. The times in the images occasionally span a second as well.

    Anyone have a clue as to what's going on here? Is IE doing some strange caching? The image header has "no-cache", "no-store" and all that. I've tried it on IE6 and 7. You can use the "Next" button to create another iframe; in IE, the images are always the same.

    Notes: I don't really need iframes, just the images, but if I only use img tags the problem appears in Chrome and FF as well. I also don't really need to load these iframes dynamically; I was just trying to abstract the issue further and allow a delayed load for the latter 2 images.
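
    A pragmatic workaround when a browser ignores no-cache headers is to make every request URL unique, so there is nothing in the cache to match against. A sketch of the usual cache-busting trick in the JavaScript that writes the img tag (the image path is illustrative):

        // append a throwaway query parameter that changes on every load
        var src = 'http://other.server/dynamicImage.jsp?nocache=' + new Date().getTime();
        document.write('<img src="' + src + '" alt="dynamic image">');

    The JSP can simply ignore the extra parameter; IE just sees a URL it has never fetched before and requests a fresh image each time.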

  • What You Need to Know About Windows 8.1

    - by Chris Hoffman
Windows 8.1 is available to everyone starting today, October 19. The latest version of Windows improves on Windows 8 in every way. It's a big upgrade, whether you use the desktop or the new touch-optimized interface. The latest version of Windows has been dubbed "an apology" by some — it's definitely more at home on a desktop PC than Windows 8 was. However, it also offers a more fleshed-out and mature tablet experience.

How to Get Windows 8.1

For Windows 8 users, Windows 8.1 is completely free. It is available as a download from the Windows Store — that's the "Store" app in the Modern, tiled interface. Assuming upgrading to the final version works just like upgrading to the preview version, you'll likely see a "Get Windows 8.1" pop-up that will take you to the Windows Store and guide you through the download process. You'll also be able to download ISO images of Windows 8.1, so you can perform a clean install instead of upgrading in place.

On any new computer, you can just install Windows 8.1 without going through Windows 8. New computers will start to ship with Windows 8.1, and boxed copies of Windows 8 will be replaced by boxed copies of Windows 8.1. If you're using Windows 7 or a previous version of Windows, the update won't be free. Getting Windows 8.1 will cost you the same amount as a full copy of Windows 8 — $120 for the standard version. If you're an average Windows 7 user, you're likely better off waiting until you buy a new PC with Windows 8.1 included rather than spending that much money to upgrade.

Improvements for Desktop Users

Some have dubbed Windows 8.1 "an apology" from Microsoft, although you certainly won't see Microsoft referring to it this way. Either way, Steven Sinofsky, who presided over Windows 8's development, left the company shortly after Windows 8 was released, and Windows 8.1 contains many features that Sinofsky and Microsoft had previously refused to implement. Windows 8.1 offers the following big improvements for desktop users:

Boot to Desktop: You can now log in directly to the desktop, skipping the tiled interface entirely.

Disable Top-Left and Top-Right Hot Corners: The app switcher and charms bar won't appear when you move your mouse to the top-left or top-right corners of the screen if you enable this option. No more intrusions into the desktop.

The Start Button Returns: Windows 8.1 brings back an always-present Start button on the desktop taskbar, dramatically improving discoverability for new Windows 8 users and providing a bigger mouse target for remote desktops and virtual machines. Crucially, the Start menu isn't back — clicking this button opens the full-screen Modern interface. Start menu replacements will continue to function on Windows 8.1, offering more traditional Start menus.

Show All Apps By Default: Luckily, you can hide the Start screen and its tiles almost entirely. Windows 8.1 can be configured to show a full-screen list of all your installed apps when you click the Start button, with desktop apps prioritized. The only real difference is that the Start menu is now a full-screen interface.

Shut Down or Restart From Start Button: You can now right-click the Start button to access Shut down, Restart, and other power options in just as many clicks as on Windows 7.

Shared Start Screen and Desktop Backgrounds: Windows 8 limited you to just a few Steven Sinofsky-approved background images for your Start screen, but Windows 8.1 allows you to use your desktop background on the Start screen. This can make the transition between the Start screen and desktop much less jarring. The tiles or shortcuts appear to be floating above the desktop rather than off in their own separate universe.

Unified Search: Unified search is back, so you can start typing and search your programs, settings, and files all at once — no more awkwardly clicking between different categories when trying to open a Control Panel screen or search for a file.

These all add up to a big improvement when using Windows 8.1 on the desktop. Microsoft is being much more flexible — the Start menu is full screen, but Microsoft has relented on so many other things that you'd never have to see a tile if you didn't want to. For more information, read our guide to optimizing Windows 8.1 for a desktop PC. These are just the improvements specifically for desktop users. Windows 8.1 includes other useful features for everyone, such as deep SkyDrive integration that allows you to store your files in the cloud without installing any additional sync programs.

Improvements for Touch Users

If you have a Windows 8 or Windows RT tablet, or another touch-based device on which you use the interface formerly known as Metro, you'll see many other noticeable improvements. Windows 8's new interface was half-baked when it launched, but it's now much more capable and mature.

App Updates: Windows 8's included apps were extremely limited in many cases. For example, Internet Explorer 10 could only display ten tabs at a time, and the Mail app was a barren experience devoid of features. In Windows 8.1, some apps — like Xbox Music — have been redesigned from scratch, Internet Explorer allows you to display a tab bar on-screen all the time, and apps like Mail have accumulated quite a few useful features. The Windows Store app has been entirely redesigned and is less awkward to browse.

Snap Improvements: Windows 8's Snap feature was a toy, allowing you to snap one app to a small sidebar at one side of your screen while another app consumed most of your screen. Windows 8.1 allows you to snap two apps side-by-side, seeing each app's full interface at once. On larger displays, you can even snap three or four apps at once. Windows 8's ability to use multiple apps at once on a tablet is compelling and unmatched by iPads and Android tablets. You can also snap two instances of the same app side-by-side — to view two web pages at once, for example.

More Comprehensive PC Settings: Windows 8.1 offers a more comprehensive PC settings app, allowing you to change most system settings in a touch-optimized interface. You shouldn't have to use the desktop Control Panel on a tablet anymore — or at least not as often.

Touch-Optimized File Browsing: Microsoft's SkyDrive app allows you to browse files on your local PC, finally offering a built-in, touch-optimized way to manage files without using the desktop.

Help & Tips: Windows 8.1 includes a Help+Tips app that will help guide new users through its new interface, something Microsoft stubbornly refused to add during development.

There's still no "Modern" version of the Microsoft Office apps (aside from OneNote), so you'll still have to use the desktop Office apps on tablets. It's not perfect, but the Modern interface doesn't feel anywhere near as immature anymore. Read our in-depth look at the ways Microsoft's Modern interface, formerly known as Metro, is improved in Windows 8.1 for more information.

In summary, Windows 8.1 is what Windows 8 should have been. All of these improvements come on top of the many great desktop features, security improvements, and all-around battery life and performance optimizations that appeared in Windows 8. If you're still using Windows 7 and are happy with it, there's probably no reason to race out and buy a copy of Windows 8.1 at the rather high price of $120. But if you're using Windows 8, it's a big upgrade no matter what you're doing. If you buy a new PC and it comes with Windows 8.1, you're getting a much more flexible and comfortable experience. If you're holding off on buying a new computer because you don't want Windows 8, give Windows 8.1 a try — yes, it's different, but Microsoft has compromised on the desktop while making a lot of improvements to the new interface. You just might find that Windows 8.1 is now a worthwhile upgrade, even if you only want to use the desktop.

    Read the article

  • MySQL Connect: What to Expect From the Wondrous Land of MySQL Cluster

    - by Mat Keep
The MySQL Connect conference is only a couple of weeks away, with MySQL engineers, support teams, consultants and community aces busy putting the final touches to their talks. There will be many exciting new announcements and sharing of best practices at the conference, covering the range of MySQL technologies. MySQL Cluster will be a big part of this, so I wanted to share some key sessions for those of you who plan on attending, as well as some resources for those who are not lucky enough to be able to make the trip, but who can't afford to miss the key news. Of course, this is no substitute for actually being there… and the good news is that registration is still open ;-)

Roadmap: What's New in MySQL Cluster
Saturday 29th, 1300-1400, in Golden Gate room 5.
Bernd Ocklin, director of MySQL Cluster development, and myself will be taking a look at what follows the latest MySQL Cluster 7.2 release. I don't want to give too much away - let's just say it's not often you can add powerful new functionality to a product while at the same time making life radically simpler for its users. For those not making it to the conference, a live webinar repeating the talk is scheduled for Thursday 25th October at 09.00 Pacific time. Hold the date; registration will open for that soon and be published to our MySQL Webinars page.

Best Practices

Getting Started with MySQL Cluster, Hands-On Lab
Saturday 29th, 1600-1700, in Plaza Room A.
Santo Leto, one of our lead MySQL Cluster support engineers, regularly works with users new to MySQL Cluster, assisting them in installation, configuration, scaling, etc. In this lab, Santo will share best practices in getting started.

Delivering Breakthrough Performance with MySQL Cluster
Saturday 29th, 1730-1830, in Golden Gate room 5.
Frazer Clement, lead MySQL Cluster software engineer, will demonstrate how to translate the awesome Cluster benchmarks (remember 1 BILLION UPDATEs per minute?!) into real-world performance. You can also get some best practices from our new MySQL Cluster performance guide.

MySQL Cluster BoF
Saturday 29th, 1900-2000, room Golden Gate 5.
Come and get a demonstration of new tools for the installation and configuration of MySQL Cluster, and spend time with the engineering team discussing any questions or issues you may have.

Developing High-Throughput Services with NoSQL APIs to InnoDB and MySQL Cluster
Sunday 30th, 1145-1245, in Golden Gate room 7.
In this session, JD Duncan and Andrew Morgan will present how to get started with both Memcached and the new NoSQL APIs. JD and I recently ran a webinar demonstrating how to build simple Twitter-like services with Memcached and MySQL Cluster. The replay is available for download (a small illustrative sketch of the idea follows at the end of this post).

Case Studies: MySQL Cluster @ El Chavo, Latin America's #1 Facebook Game
Sunday 30th, 1745-1845, in Golden Gate room 4.
Playful Play deployed MySQL Cluster CGE to power their market-leading social game. This session will discuss the challenges they faced, why they selected MySQL Cluster and their experiences to date. You can read more about Playful Play and MySQL Cluster here.

A Journey into NoSQLand: MySQL's NoSQL Implementation
Sunday 30th, 1345-1445, in Golden Gate room 4.
Lig Turmelle, web DBA at Kaplan Professional and esteemed Oracle Ace, will discuss her experiences working with the NoSQL interfaces for both MySQL Cluster and InnoDB.

Evaluating MySQL HA Alternatives
Saturday 29th, 1430-1530, room Golden Gate 5.
Henrik Ingo, former member of the MySQL sales engineering team, will provide an overview of various HA technologies for MySQL, starting with replication and progressing to InnoDB, Galera and MySQL Cluster.

What about the other stuff?
Of course MySQL Connect has much, much more than MySQL Cluster. There will be lots on replication (which I'll blog about soon), MySQL 5.6, InnoDB, cloud, etc, etc. Take a look at the full Content Catalog to see more. If you are attending, I hope to see you at one of the Cluster sessions... and remember, registration is still open!
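On the Memcached theme above: as a hedged sketch (not taken from the webinar itself), this is roughly what talking to MySQL Cluster 7.2 through its memcached-protocol API can look like from Python. The endpoint, port and key naming are assumptions for illustration only.

```python
# Minimal sketch: reading/writing MySQL Cluster via its memcached API.
# Assumes the Cluster memcached endpoint is enabled and listening on
# localhost:11211 (hypothetical setup); requires the pymemcache package.
from pymemcache.client.base import Client

client = Client(("localhost", 11211))

# Writes go through the memcached front end into the NDB data nodes.
client.set("user:1001:status", "Working on my MySQL Connect talk")

# Reads are served from the same cluster, so SQL clients see the row too.
print(client.get("user:1001:status"))
```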

    Read the article

  • Meet the New Windows Azure

    - by Leniel Macaferi
Today we are releasing a major set of improvements to Windows Azure. Below is a brief summary of just a few of them:

New Admin Portal and Command-Line Tools

Today's release comes with a new Windows Azure portal that lets you manage all of the resources and services offered on Windows Azure in a seamlessly integrated way. The portal is very fast and fluid, supports filtering and sorting of data (which makes it very easy to use for large deployments), works in every browser, and offers a lot of great new features - including built-in support for Virtual Machines (VMs), Web Sites, Storage, and monitoring of cloud-hosted Services.

The new portal is built on top of a REST-based management API within Windows Azure - everything you can do through the portal can also be done programmatically against this Web API (a short illustrative sketch follows at the end of this section). We are also releasing command-line tools today (which, like the portal, call the REST management APIs) to make it even easier to script and automate your administration tasks. A set of tools is available for download for PowerShell (Windows) and Bash (Mac and Linux). Like our SDKs, the code for these tools is hosted on GitHub under an Apache 2 license.

Virtual Machines (VMs)

Windows Azure now supports the ability to deploy and run durable/persistent VMs in the cloud. You can easily create these VMs using a new Image Gallery built into the new Windows Azure Portal, or alternatively you can upload and run your own custom VHD images. Virtual machines are durable (meaning anything you install inside them persists across reboots) and you can run any operating system in them. Our built-in image gallery includes Windows Server images (including the new Windows Server 2012 RC) as well as Linux images (including the Ubuntu, CentOS and SUSE distributions). After creating a VM instance you can easily use Terminal Server or SSH to access it and configure and customize the virtual machine any way you want (and optionally capture a snapshot of the current image to use when creating new VM instances). This gives you the flexibility to run practically any workload on the Windows Azure platform.

The new Windows Azure Portal provides a rich set of features for managing Virtual Machines - including the ability to monitor and track resource utilization within them. Our new Virtual Machine support also makes it easy to attach multiple disks to a VM (which you can then mount and format as drives). Optionally, you can enable geo-replication support for these disks - this causes Windows Azure to continuously replicate your storage to a secondary data center (creating a backup) located at least 640 kilometers away from your primary data center.
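To make the "everything is scriptable" point above concrete, here is a hedged sketch (not part of the original post) of calling the Service Management REST API directly from Python. The subscription ID and certificate file names are placeholders; the API authenticates with a management certificate and expects an x-ms-version header.

```python
# Hedged sketch: listing hosted services via the Service Management REST API.
# Requires the requests package and a management certificate uploaded to the
# subscription; SUBSCRIPTION_ID and the PEM file names are placeholders.
import requests

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
url = "https://management.core.windows.net/%s/services/hostedservices" % SUBSCRIPTION_ID

resp = requests.get(
    url,
    headers={"x-ms-version": "2012-03-01"},       # required API version header
    cert=("mgmt_cert.pem", "mgmt_cert_key.pem"),  # client certificate auth
)
resp.raise_for_status()
print(resp.text)  # XML document describing the hosted services
```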
We use the same VHD format that is supported by Windows virtualization today (and which we have released as an open specification), enabling you to easily migrate existing workloads that you have already virtualized into Windows Azure. We also make it easy to download VHDs from Windows Azure, which gives you the flexibility to easily migrate cloud-based VM workloads to an on-premises environment. All you need to do is download the VHD file and boot it locally - no import/export step is required.

Web Sites

Windows Azure now supports the ability to quickly and easily deploy ASP.NET, Node.js and PHP web sites into a highly scalable cloud environment that lets you start small (and for free) and then scale your application as your traffic grows. You can create a new web site in Azure and have it ready to deploy in under 10 seconds.

The new Windows Azure Portal provides built-in support for administering Web Sites, including the ability to monitor and track resource utilization in real time.

You can deploy to a web site in seconds using FTP, Git, TFS and Web Deploy. We are also releasing updates to the Visual Studio and WebMatrix tooling that let developers easily deploy ASP.NET applications to this new offering. The VS and WebMatrix publishing support includes the ability to deploy SQL databases as part of a site deployment - as well as the ability to incrementally update the database schema in a later deployment.

You can integrate web application publishing with source control by selecting the "Set up TFS publishing" or "Set up Git publishing" links on a web site's dashboard. Doing so enables integration with our new online TFS service (which supports a full TFS workflow - including elastic build and test support), or lets you create a Git repository and reference it as a remote for automatic deployments (a minimal command-line sketch follows below). Once you deploy with TFS or Git, the Deployments tab tracks the deployments you make and lets you select an older (or newer) deployment so that you can quickly roll your site back to an earlier state of your code. This provides a very powerful workflow experience.

Windows Azure now lets you deploy up to 10 web sites into a free, shared multi-tenant hosting environment (where a site you deploy is one of several sites running on a shared set of server resources). This gives you an easy way to start developing projects at no cost.
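As a concrete illustration of the Git publishing flow just described, the commands below are a hedged sketch; the remote URL is a placeholder for the one shown on the portal's "Set up Git publishing" page.

```
git init
git add -A
git commit -m "Initial deployment"
# The remote URL below is hypothetical; copy the real one from the portal.
git remote add azure https://user@mysite.scm.azurewebsites.net/mysite.git
git push azure master
```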
Optionally, you can upgrade your sites to run in a "reserved mode" that isolates them, so that you are the only customer inside a virtual machine. And you can elastically scale the amount of resources your sites use - letting you, for example, increase the capacity of your reserved instance as your traffic grows.

Windows Azure automatically load-balances traffic across the VM instances, and you get the same super-fast deployment options (FTP, Git, TFS and Web Deploy) regardless of how many reserved instances you use. With Windows Azure you pay for compute capacity by the hour - which lets you scale your resources up and down to match exactly what you need.

Cloud Services and Distributed Caching

Windows Azure also supports the ability to build cloud services that enable rich multi-tier architectures and automated application management, and that can scale to extremely large deployments. Previously we referred to this capability as "hosted services" - with this week's release we are renaming the capability "cloud services". We are also enabling a lot of new features with them.

Distributed Cache

One of the really cool new features being enabled with cloud services is a new distributed caching capability that lets you use and configure a low-latency, in-memory distributed cache within your applications. This cache is isolated for use only by your applications and has no throttling limits. The cache can grow and shrink dynamically and elastically (without you having to redeploy your application or make code changes), and it supports the full richness of the AppFabric Cache Server API (including regions, high availability, notifications, local cache and more). In addition to the AppFabric Cache Server API, this new caching capability now also supports the Memcached protocol - letting you point code written for Memcached at the distributed cache (with no code changes required; see the sketch after this section).

The new distributed cache can be configured to run in one of two ways:

1) Using a co-located cache approach. With this option you allocate a percentage of memory from your existing web and worker roles to be used by the cache, and the cache then joins that memory into one large distributed cache. Any data put into the cache by one role instance can be accessed by the other role instances in your application - regardless of whether the cached data is stored on that role or another one. The big benefit of the "co-located" option is that it is free (you pay nothing to enable it) and it lets you use what would otherwise be unused memory within your application's VMs.

2) Alternatively, you can add "cache worker roles" to your cloud service that are used solely for caching. These are also joined into one large distributed cache ring that the other roles in your application can access. You can use these roles to cache tens or hundreds of GBs of data in memory extremely effectively - and the cache can be elastically grown or shrunk at runtime within your application.
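Because the cache speaks the Memcached protocol, an existing memcached client can target it unchanged. The snippet below is a hedged sketch; the endpoint address and key names are assumptions (the real address comes from your cloud service's cache configuration):

```python
# Hedged sketch: using the Memcached-compatible endpoint of the Windows Azure
# distributed cache. The host and port are placeholders from a hypothetical
# configuration; requires the pymemcache package.
from pymemcache.client.base import Client

cache = Client(("mycloudservice.cloudapp.net", 11211))  # hypothetical endpoint

# Maintain a simple page-hit counter shared by all role instances.
if cache.get("hits:/index") is None:
    cache.set("hits:/index", "0")
cache.incr("hits:/index", 1)  # increment handled by the cache ring
print(cache.get("hits:/index"))
```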
New SDKs and Tooling Support

We have updated all of the Windows Azure SDKs (software development kits) with today's release to include new features and capabilities. Our SDKs are now available for multiple languages, and all of their source code is published under an Apache 2 license and maintained in repositories on GitHub. The .NET SDK for Azure in particular has a lot of great improvements with today's release, and now includes tooling support for both VS 2010 and VS 2012 RC. We are also now shipping SDK downloads for Windows, Mac and Linux in the languages offered on all of those systems - so developers can build Windows Azure applications using any operating system for development.

Much, Much More

The summary above is only a short list of some of the improvements shipping in preview or final form today - there is a lot more in today's release. Among them are new Virtual Private Networking capabilities, a new Service Bus runtime and tooling, the public preview of the new Azure Media Services, new data centers, a significant upgrade of the storage and networking hardware, SQL Reporting Services, new identity features, support for more than 40 new countries and territories, and much, much more.

You can learn more about Windows Azure and sign up to try it for free at http://windowsazure.com. You can also watch a live presentation I will be giving today, June 7th, at 1pm PDT, where I will walk through all of the new features. We will be opening up the new features referred to above for public use a few hours after the presentation ends. We are really excited to see the great applications you will build with these new features.

Hope this helps,

- Scott

(Text translated from the original post by Leniel Macaferi.)

    Read the article

  • Azure Grid Computing - Worker Roles as HPC Compute Nodes

    - by JoshReuben
Overview

· With HPC 2008 R2 SP1 you can add Azure worker roles as compute nodes in a local Windows HPC Server cluster.
· The Windows Azure subscription is charged like any other Azure service - for the time the role instances are available, as well as for the compute and storage services used on the nodes.
· Win-win? Azure charges by compute hour (according to VM size), amortized over the month - so you save on purchasing compute node hardware. Microsoft wins because you need to purchase HPC to have a local head node for managing this compute cluster grid distributed in the cloud.
· Blob storage is used to hold the input and output files of each job. I can see how parametric sweep HPC jobs can be supported (where the same job is run multiple times on each node against different input units), but not MPI.NET (where different HPC job instances function as coordinated agents and conduct master-slave inter-process communication), unless Azure is somehow tunneling MPI communication through inter-WorkerRole Azure Queues.
· This is not the end of the story for Azure grid computing. If MS requires you to purchase a local HPC license (and administer it), what's to stop a 3rd party from doing this and exposing the HPC WCF Broker Service to you for managing compute nodes? If MS doesn't provide the head node as a service, someone else will!

Process

· Requires creation of a worker node template that specifies a connection to an existing Windows Azure subscription, plus an availability policy for the worker nodes.
· After worker nodes are added to the cluster, you can start them, which provisions the Windows Azure role instances, and then bring them online to run HPC cluster jobs.
· A Windows Azure worker role instance runs an HPC-compatible Azure guest operating system on the VMs that host your service. The guest operating system is updated monthly. You can choose to upgrade the guest OS for your service automatically each time an update is released - all role instances defined by your service will then run on the guest OS version that you specify. See Windows Azure Guest OS Releases and SDK Compatibility Matrix (http://go.microsoft.com/fwlink/?LinkId=190549).
· Use the hpcpack command to upload file packages and install files to run on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).

Requirements

· Assumes you have an Azure subscription account and the HPC head node installed and configured.
· Install HPC Pack 2008 R2 SP1 - see Microsoft HPC Pack 2008 R2 Service Pack 1 Release Notes (http://go.microsoft.com/fwlink/?LinkID=202812).
· Configure the head node to connect to the Internet - connectivity is provided by the connection of the head node to the enterprise network. You may need to configure a proxy client on the head node. Any cluster network topology (1-5) is supported.
· Configure the firewall - allow outbound TCP traffic on the following ports: 80, 443, 5901, 5902, 7998, 7999 (see the command sketch at the end of this article).
· Note: HPC Server uses Admin Mode (elevated privileges) in Windows Azure to give the service administrator of the subscription the privileges necessary to initialize HPC cluster services on the worker nodes.
· Obtain a Windows Azure subscription certificate - the Windows Azure subscription must be configured with a public subscription (API) certificate: a valid X.509 certificate with a key size of at least 2048 bits. Generate a self-signed certificate and upload a .cer file via the Windows Azure Portal Account page > Manage my API Certificates link. See Using the Windows Azure Service Management API (http://go.microsoft.com/fwlink/?LinkId=205526).
· Import the certificate, with its associated private key, on the HPC cluster head node - into the trusted root store of the local computer account.

Obtain Windows Azure Connection Information for HPC Server

· Required for each worker node template; copy it from the Azure portal - navigation pane > Hosted Services > Storage Accounts & CDN.
· Subscription ID - a 32-char hex string in the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. In the Properties pane.
· Subscription certificate thumbprint - a 40-char hex string (you need to remove spaces). In Management Certificates > Properties pane.
· Service name - the value of <ServiceName> configured in the public URL of the service (http://<ServiceName>.cloudapp.net). In Hosted Services > Properties pane.
· Blob Storage account name - the value of <StorageAccountName> configured in the public URL of the account (http://<StorageAccountName>.blob.core.windows.net). In Storage Accounts > Properties pane.

Import the Azure Subscription Certificate on the HPC Head Node

· This enables the Windows HPC Server services to authenticate properly with the Windows Azure subscription.
· Use the Certificates MMC snap-in to import the certificate into the Trusted Root Certification Authorities store of the local computer account. The certificate must be in PFX format (.pfx or .p12 file) with a private key protected by a password. See Certificates (http://go.microsoft.com/fwlink/?LinkId=163918).
· To open the Certificates snap-in: Run > mmc, then File > Add/Remove Snap-in > Certificates > Computer account > Local Computer.
· To import the certificate via the wizard: Certificates > Trusted Root Certification Authorities > Certificates > All Tasks > Import.
· After the certificate is imported, it appears in the details pane of the Certificates snap-in. You can open the certificate to check its status.

Configure a Proxy Client on the HPC Head Node

· The following Windows HPC Server services must be able to communicate over the Internet (through the firewall) with the services for Windows Azure: HPCManagement, HPCScheduler, HPCBrokerWorker.

Create a Windows Azure Worker Node Template

· Edit HPC node templates in the HPC Node Template Editor.
· Specify: 1) the Windows Azure subscription connection info (unique service name) for adding a set of worker nodes to the cluster, and 2) the worker node availability policy - the rules for deploying/removing worker role instances in Windows Azure.
  o HPC Cluster Manager > Configuration > navigation pane > Node Templates > Actions pane > New (Create Node Template Wizard) or Edit (Node Template Editor)
  o Choose Node Template Type page - Windows Azure worker node template
  o Specify Template Name page - template name and description
  o Provide Connection Information page - Azure Subscription ID (text) and subscription certificate (browse)
  o Provide Service Information page - Azure service name plus blob storage account name (optionally click Retrieve Connection Information to get the list of available services from Azure - this can be a long-running task)
  o Configure Azure Availability Policy page - how Windows Azure worker nodes start/stop (bring the worker role instance online/offline - add/remove) - manual or automatic
  o For automatic - in the Configure Windows Azure Worker Availability Policy dialog, select the days and hours for worker nodes to start and stop.
· To validate the Windows Azure connection information, on the template's Connection Information tab, click Validate connection information.
· You can upload a file package to the storage account that is specified in the template - e.g. upload application or service files that will run on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).

Add Azure Worker Nodes to the HPC Cluster

· Use the Add Node Wizard - specify: 1) the worker node template, 2) the number of worker nodes (within the subscription's quota of role instances), and 3) the VM size of the worker nodes: ExtraSmall, Small, Medium, Large, or ExtraLarge.
· To add worker nodes of different sizes, you must run the Add Node Wizard separately for each size.
· All worker nodes added to the cluster by using a specific worker node template define a set of worker nodes that will be deployed and managed together in Windows Azure when you start the nodes. This includes worker nodes that you add later by using the same worker node template and, if you choose, worker nodes of different sizes. You cannot start, stop, or delete individual worker nodes.
· To add Windows Azure worker nodes:
  o In HPC Cluster Manager: Node Management > Actions pane > Add Node (Add Node Wizard)
  o Select Deployment Method page - Add Azure Worker nodes
  o Specify New Nodes page - select a worker node template, then specify the number and size of the worker nodes
· After you add worker nodes to the cluster, they are in the Not-Deployed state and have a health state of Unapproved. Before you can use the worker nodes to run jobs, you must start them and then bring them online.
· Worker nodes are numbered consecutively in a naming series that begins with the root name AzureCN - this is non-configurable.

Deploying Windows Azure Worker Nodes

· To deploy the role instances in Windows Azure, start the worker nodes added to the HPC cluster and bring the nodes online so that they are available to run cluster jobs. This can be configured in the HPC Azure worker node template (Azure Availability Policy) to be automatic or manual.
· The Start, Stop, and Delete actions take place on the set of worker nodes configured by a specific worker node template. You cannot perform one of these actions on a single worker node in a set. You also cannot perform a single action on two sets of worker nodes (specified by two different worker node templates).
· Starting a set of worker nodes deploys a set of worker role instances in Windows Azure, which can take some time to complete, depending on the number of worker nodes and the performance of Windows Azure.
· To start worker nodes manually and bring them online:
  o In HPC Node Management > navigation pane > Nodes > List/Heat Map view, select one or more worker nodes.
  o Actions pane > Start - in the Start Azure Worker Nodes dialog, select a node template.
  o The state of the worker nodes changes from Not Deployed; to track the provisioning progress, see the worker node Details pane > Provisioning Log tab.
  o If there were errors during the provisioning of one or more worker nodes, the state of those nodes is set to Unknown and the node health is set to Unapproved. To determine the reason for the failure, review the provisioning logs for the nodes.
  o After a worker node starts successfully, the node state changes to Offline. To bring the nodes online, select the nodes that are in the Offline state > Bring Online.
· Troubleshooting:
  o Check the node template.
  o Use telnet to test connectivity: telnet <ServiceName>.cloudapp.net 7999
  o Check node status - deployment status information appears in the service account information in the Windows Azure Portal (HPC queries this); see the node status information for any failed nodes in HPC Node Management.
· When role instances are deployed, file packages that were previously uploaded to the storage account using the hpcpack command are automatically installed. You can also upload file packages to storage after the worker nodes are started, and then manually install them on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).
· To remove a set of role instances in Windows Azure, stop the nodes by using HPC Cluster Manager (apply the Stop action). This deletes the role instances from the service and changes the state of the worker nodes in the HPC cluster to Not Deployed.
· Each time you start a set of worker nodes, two proxy role instances (size Small) are configured in Windows Azure to facilitate communication between HPC Cluster Manager and the worker nodes. The proxy role instances are not listed in HPC Cluster Manager after the worker nodes are added; however, they do appear in the Windows Azure Portal. The proxy role instances incur charges in Windows Azure along with the worker node instances, and they count toward the subscription's quota of role instances.
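To make a few of the setup steps above concrete, here is a hedged command sketch (an illustration, not taken from the original article; the certificate and rule names are placeholders) covering the self-signed management certificate, the outbound firewall ports, and the connectivity check from the Troubleshooting list:

```
REM Hedged sketch - run in an elevated command prompt on the head node.
REM 1) Generate a 2048-bit self-signed certificate (makecert from the Windows SDK):
makecert -r -pe -n "CN=HpcAzureMgmt" -a sha1 -len 2048 -ss My -sky exchange HpcAzureMgmt.cer

REM 2) Allow the outbound TCP ports that HPC Server needs to reach Windows Azure:
netsh advfirewall firewall add rule name="HPC Azure outbound" dir=out action=allow protocol=TCP remoteport=80,443,5901,5902,7998,7999

REM 3) Verify connectivity to the hosted service:
telnet <ServiceName>.cloudapp.net 7999
```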

    Read the article

< Previous Page | 450 451 452 453 454 455 456 457 458 459 460 461  | Next Page >