Search Results


  • Does JAXP natively parse HTML?

    - by ikmac
    So, I whip up a quick test case in Java 7 to grab a couple of elements from random URIs, and see if the built-in parsing stuff will do what I need. Here's the basic setup (with exception handling etc omitted): DocumentBuilderFactory dbfac = DocumentBuilderFactory.newInstance(); DocumentBuilder dbuild = dbfac.newDocumentBuilder(); Document doc = dbuild.parse("uri-goes-here"); With no error handler installed, the parse method throws exceptions on fatal parse errors. When getting the standard Apache 2.2 directory index page from a local server: a SAXParseException with the message White spaces are required between publicId and systemId. The doctype looks ok to me, whitespace and all. When getting a page off a Drupal 7 generated site, it never finishes. The parse method seems to hang. No exceptions thrown, never returns. When getting http://www.oracle.com, a SAXParseException with the message The element type "meta" must be terminated by the matching end-tag "</meta>". So it would appear that the default setup I've used here doesn't handle HTML, only strictly written XML. My question is: can JAXP be used out-of-the-box from openJDK 7 to parse HTML from the wild (without insane gesticulations), or am I better off looking for an HTML 5 parser? PS this is for something I may not open-source, so licensing is also an issue :(
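    A minimal sketch of one possible alternative (an assumption, not part of the original question): a dedicated HTML parser such as jsoup is generally far more forgiving of real-world markup than JAXP's strict XML parsing. The class name and URL below are placeholders, and the snippet assumes the jsoup library is on the classpath.

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class HtmlParseSketch {
    public static void main(String[] args) throws Exception {
        // jsoup builds a document tree even from malformed HTML,
        // rather than throwing fatal parse errors the way a strict
        // XML parser does.
        Document doc = Jsoup.connect("http://www.oracle.com").get();

        // Pull a couple of elements out with CSS selectors.
        System.out.println("Title: " + doc.title());
        for (Element meta : doc.select("meta")) {
            System.out.println(meta.attr("name") + " = " + meta.attr("content"));
        }
    }
}
```

    Whether this fits depends on the licensing concern mentioned above; checking the parser's current license before shipping it in a closed-source product would be the safer route.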


  • SQL SERVER – Changing Default Installation Path for SQL Server

    - by pinaldave
    Earlier I wrote a blog post, SQL SERVER – Move Database Files MDF and LDF to Another Location, in which we discussed how to change the location of the MDF and LDF files after a database has already been created. I had mentioned that we would also discuss how to change the default location for new databases, so that we do not have to move a database after it has been created. The ideal scenario is to specify this default location for the database files when the SQL Server installation is performed, but if you have already installed SQL Server there is an easy way to solve this problem. The change will not impact any database created before it; it only affects the default location of databases created afterwards. To change the default location, follow these steps: Right-click on the server >> Click on Properties >> Go to the Database Settings screen. There you can change the default locations of the database files. All future databases created after the setting is changed will go to the new location. You can also do the same with T-SQL, and here is the code: USE [master] GO EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'DefaultData', REG_SZ, N'F:\DATA' GO EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'DefaultLog', REG_SZ, N'F:\DATA' GO What best practices do you follow with regard to the default file location for your databases? I am interested to know them. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology


  • Upcoming Webinar: Practical Performance Profiling presented by Jean-Philippe Gouigoux

    - by Michaela Murray
    Hot on the heels of releasing his new book, Practical Performance Profiling, I'm delighted that Jean-Philippe Gouigoux will be joining us on April 3rd to present a free webinar on optimizing .NET code performance. He gave me a sneak preview of his talk last week and there's a lot of really useful advice in there. He'll be discussing why he thinks 20% of performance problems account for 80% of lost time, before looking at some real examples of both server-side and client-side profiling, and covering a variety of best practices you can use to improve the performance of your own code. The webinar will be followed by a Q&A session where he'll be joined by Red Gate technical support engineer Chris Allen to answer any of your questions. Jean-Philippe has 10 years' experience in .NET, most recently as system architect at MGDIS, and was recently made a Microsoft MVP for his contributions to the .NET community. I'm really excited that he's found a gap between his day job and university lecturing to share his knowledge, and I hope you'll be able to join us on April 3rd - it's free but you do need to register in advance at https://www3.gotomeeting.com/register/829014934. I'll see you there!


  • Leverage Location Data with Maps in Your Apps

    - by stephen.garth
    Free Webinar: "Add Maps to Your Java Applications - The Easy Way" Wednesday May 26 at 9:00am Pacific Time Putting maps in your apps is a great way to put your apps on the map! Maps provide a location context that can trigger those "aha" moments leading to better business decisions. Tune into this free webinar to find out how easy it is to leverage spatial and location data, much of which is already in your Oracle Database. NAVTEQ's Dan Abugov and Oracle's Shay Shmeltzer combine their considerable experience with Oracle Spatial and Java application development to demonstrate how you can quickly and easily add maps to your Java applications, leveraging Oracle Spatial 11g, Oracle Fusion Middleware MapViewer, Oracle JDeveloper and ADF 11g. Register here. Learn more about Oracle spatial and location technology var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www."); document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E")); try { var pageTracker = _gat._getTracker("UA-13185312-1"); pageTracker._trackPageview(); } catch(err) {}


  • July, the 31 Days of SQL Server DMO’s – Day 31 (sys.dm_server_services)

    - by Tamarick Hill
    The last DMV for this month-long blog session is the sys.dm_server_services DMV. This DMV returns information about your SQL Server, Full-Text, and SQL Server Agent related services. To further illustrate the information this DMV contains, let's run it against the Training instance that we have been using for this blog series. SELECT * FROM sys.dm_server_services The first column returned by this DMV is the actual service name. The next columns are startup_type and startup_type_desc, which display your chosen method for how a particular service should be started. The next columns, status and status_desc, display the current status of each of the services on the instance. The process_id column represents the server process id. The last_startup_time column gives you the last time that a particular service was started. The service_account column provides you with the name of the account that is used to control the service. The filename column gives you the full path to the executable for the service. Lastly we have the is_clustered and cluster_nodename columns, which indicate whether or not a particular service is clustered and part of a resource cluster group, and if so, the cluster node that the service is installed on. This is a good DMV to provide you with a quick snapshot view of the current SQL Server services you have on your instance. For more information on this DMV, please see the Books Online link below: http://msdn.microsoft.com/en-us/library/hh204542.aspx Follow me on Twitter @PrimeTimeDBA


  • Can't adjust backlight on an Nvidia 335m GT

    - by Vladimir
    I have a laptop mySN QMG6 / Chiligreen Mobilitas NW, which is a Quanta TW9 barebone with an Intel i3 and an Nvidia 335m GT onboard. On Ubuntu 10.04, 10.10, 11.04 and 11.10 I had problems changing the screen backlight with both the nouveau and nvidia drivers: the FN+F4/F5 buttons did not change the brightness. I tried to edit xorg.conf, adding Option "RegistryDwords" "EnableBrightnessControl=1". I also tried adding some options to grub: acpi_osi="Linux" acpi_backlight=vendor. Neither worked for me. Today I installed Ubuntu 12.04 beta2 and... with the nouveau driver the FN keys work and change the brightness (whether that is the new 3.2 kernel or a patched nouveau driver, I don't know). This is a big step forward. But when I install the proprietary nvidia driver (295.33), the FN keys stop working and I can't change the brightness. I also tried the xorg and grub workarounds again with no result. Tried to install acpi from apt - no result. Is there anything left to try? I really need the nvidia driver working with the FN keys, as I would like to have working 3D acceleration. P.S. Does the nouveau driver have 3D acceleration like the nvidia drivers? If there is a need to provide some log data, please write what I should post, as I'm a bit new to Ubuntu. P.P.S. I had the same problems with other Linux distros (Mint, Fedora and others). P.P.P.S. Other FN buttons work with both drivers (Mute, VOL UP/DOWN, WiFi on/off, Bluetooth, Sleep, Start/Pause, Stop, Next/Prev song). Some new thoughts... could CONFIG_BACKLIGHT_GENERIC=m be an issue? I got this from grep BACKLIGHT /boot/config-3.2.0-22-generic-pae. Full grep output can be viewed here: http://pastebin.com/sMRd2Z4k


  • Movement and Collision with AABB

    - by Jeremy Giberson
    I'm having a little difficulty figuring out the following scenarios. http://i.stack.imgur.com/8lM6i.png In scenario A, the moving entity has fallen to (and slightly into the floor). The current position represents the projected position that will occur if I apply the acceleration & velocity as usual without worrying about collision. The Next position, represents the corrected projection position after collision check. The resulting end position is the falling entity now rests ON the floor--that is, in a consistent state of collision by sharing it's bottom X axis with the floor's top X axis. My current update loop looks like the following: // figure out forces & accelerations and project an objects next position // check collision occurrence from current position -> projected position // if a collision occurs, adjust projection position Which seems to be working for the scenario of my object falling to the floor. However, the situation becomes sticky when trying to figure out scenario's B & C. In scenario B, I'm attempt to move along the floor on the X axis (player is pressing right direction button) additionally, gravity is pulling the object into the floor. The problem is, when the object attempts to move the collision detection code is going to recognize that the object is already colliding with the floor to begin with, and auto correct any movement back to where it was before. In scenario C, I'm attempting to jump off the floor. Again, because the object is already in a constant collision with the floor, when the collision routine checks to make sure moving from current position to projected position doesn't result in a collision, it will fail because at the beginning of the motion, the object is already colliding. How do you allow movement along the edge of an object? How do you allow movement away from an object you are already colliding with. Extra Info My collision routine is based on AABB sweeping test from an old gamasutra article, http://www.gamasutra.com/view/feature/3383/simple_intersection_tests_for_games.php?page=3 My bounding box implementation is based on top left/bottom right instead of midpoint/extents, so my min/max functions are adjusted. Otherwise, here is my bounding box class with collision routines: public class BoundingBox { public XYZ topLeft; public XYZ bottomRight; public BoundingBox(float x, float y, float z, float w, float h, float d) { topLeft = new XYZ(); bottomRight = new XYZ(); topLeft.x = x; topLeft.y = y; topLeft.z = z; bottomRight.x = x+w; bottomRight.y = y+h; bottomRight.z = z+d; } public BoundingBox(XYZ position, XYZ dimensions, boolean centered) { topLeft = new XYZ(); bottomRight = new XYZ(); topLeft.x = position.x; topLeft.y = position.y; topLeft.z = position.z; bottomRight.x = position.x + (centered ? dimensions.x/2 : dimensions.x); bottomRight.y = position.y + (centered ? dimensions.y/2 : dimensions.y); bottomRight.z = position.z + (centered ? 
dimensions.z/2 : dimensions.z); } /** * Check if a point lies inside a bounding box * @param box * @param point * @return */ public static boolean isPointInside(BoundingBox box, XYZ point) { if(box.topLeft.x <= point.x && point.x <= box.bottomRight.x && box.topLeft.y <= point.y && point.y <= box.bottomRight.y && box.topLeft.z <= point.z && point.z <= box.bottomRight.z) return true; return false; } /** * Check for overlap between two bounding boxes using separating axis theorem * if two boxes are separated on any axis, they cannot be overlapping * @param a * @param b * @return */ public static boolean isOverlapping(BoundingBox a, BoundingBox b) { XYZ dxyz = new XYZ(b.topLeft.x - a.topLeft.x, b.topLeft.y - a.topLeft.y, b.topLeft.z - a.topLeft.z); // if b - a is positive, a is first on the axis and we should use its extent // if b -a is negative, b is first on the axis and we should use its extent // check for x axis separation if ((dxyz.x >= 0 && a.bottomRight.x-a.topLeft.x < dxyz.x) // negative scale, reverse extent sum, flip equality ||(dxyz.x < 0 && b.topLeft.x-b.bottomRight.x > dxyz.x)) return false; // check for y axis separation if ((dxyz.y >= 0 && a.bottomRight.y-a.topLeft.y < dxyz.y) // negative scale, reverse extent sum, flip equality ||(dxyz.y < 0 && b.topLeft.y-b.bottomRight.y > dxyz.y)) return false; // check for z axis separation if ((dxyz.z >= 0 && a.bottomRight.z-a.topLeft.z < dxyz.z) // negative scale, reverse extent sum, flip equality ||(dxyz.z < 0 && b.topLeft.z-b.bottomRight.z > dxyz.z)) return false; // not separated on any axis, overlapping return true; } public static boolean isContactEdge(int xyzAxis, BoundingBox a, BoundingBox b) { switch(xyzAxis) { case XYZ.XCOORD: if(a.topLeft.x == b.bottomRight.x || a.bottomRight.x == b.topLeft.x) return true; return false; case XYZ.YCOORD: if(a.topLeft.y == b.bottomRight.y || a.bottomRight.y == b.topLeft.y) return true; return false; case XYZ.ZCOORD: if(a.topLeft.z == b.bottomRight.z || a.bottomRight.z == b.topLeft.z) return true; return false; } return false; } /** * Sweep test min extent value * @param box * @param xyzCoord * @return */ public static float min(BoundingBox box, int xyzCoord) { switch(xyzCoord) { case XYZ.XCOORD: return box.topLeft.x; case XYZ.YCOORD: return box.topLeft.y; case XYZ.ZCOORD: return box.topLeft.z; default: return 0f; } } /** * Sweep test max extent value * @param box * @param xyzCoord * @return */ public static float max(BoundingBox box, int xyzCoord) { switch(xyzCoord) { case XYZ.XCOORD: return box.bottomRight.x; case XYZ.YCOORD: return box.bottomRight.y; case XYZ.ZCOORD: return box.bottomRight.z; default: return 0f; } } /** * Test if bounding box A will overlap bounding box B at any point * when box A moves from position 0 to position 1 and box B moves from position 0 to position 1 * Note, sweep test assumes bounding boxes A and B's dimensions do not change * * @param a0 box a starting position * @param a1 box a ending position * @param b0 box b starting position * @param b1 box b ending position * @param aCollisionOut xyz of box a's position when/if a collision occurs * @param bCollisionOut xyz of box b's position when/if a collision occurs * @return */ public static boolean sweepTest(BoundingBox a0, BoundingBox a1, BoundingBox b0, BoundingBox b1, XYZ aCollisionOut, XYZ bCollisionOut) { // solve in reference to A XYZ va = new XYZ(a1.topLeft.x-a0.topLeft.x, a1.topLeft.y-a0.topLeft.y, a1.topLeft.z-a0.topLeft.z); XYZ vb = new XYZ(b1.topLeft.x-b0.topLeft.x, b1.topLeft.y-b0.topLeft.y, 
b1.topLeft.z-b0.topLeft.z); XYZ v = new XYZ(vb.x-va.x, vb.y-va.y, vb.z-va.z); // check for initial overlap if(BoundingBox.isOverlapping(a0, b0)) { // java pass by ref/value gotcha, have to modify value can't reassign it aCollisionOut.x = a0.topLeft.x; aCollisionOut.y = a0.topLeft.y; aCollisionOut.z = a0.topLeft.z; bCollisionOut.x = b0.topLeft.x; bCollisionOut.y = b0.topLeft.y; bCollisionOut.z = b0.topLeft.z; return true; } // overlap min/maxs XYZ u0 = new XYZ(); XYZ u1 = new XYZ(1,1,1); float t0, t1; // iterate axis and find overlaps times (x=0, y=1, z=2) for(int i = 0; i < 3; i++) { float aMax = max(a0, i); float aMin = min(a0, i); float bMax = max(b0, i); float bMin = min(b0, i); float vi = XYZ.getCoord(v, i); if(aMax < bMax && vi < 0) XYZ.setCoord(u0, i, (aMax-bMin)/vi); else if(bMax < aMin && vi > 0) XYZ.setCoord(u0, i, (aMin-bMax)/vi); if(bMax > aMin && vi < 0) XYZ.setCoord(u1, i, (aMin-bMax)/vi); else if(aMax > bMin && vi > 0) XYZ.setCoord(u1, i, (aMax-bMin)/vi); } // get times of collision t0 = Math.max(u0.x, Math.max(u0.y, u0.z)); t1 = Math.min(u1.x, Math.min(u1.y, u1.z)); // collision only occurs if t0 < t1 if(t0 <= t1 && t0 != 0) // not t0 because we already tested it! { // t0 is the normalized time of the collision // then the position of the bounding boxes would // be their original position + velocity*time aCollisionOut.x = a0.topLeft.x + va.x*t0; aCollisionOut.y = a0.topLeft.y + va.y*t0; aCollisionOut.z = a0.topLeft.z + va.z*t0; bCollisionOut.x = b0.topLeft.x + vb.x*t0; bCollisionOut.y = b0.topLeft.y + vb.y*t0; bCollisionOut.z = b0.topLeft.z + vb.z*t0; return true; } else return false; } }
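    One common way to handle the "already touching the floor" cases in scenarios B and C (a sketch of a general technique, not code from the question) is to resolve movement one axis at a time and keep a tiny "skin" gap between resting boxes, so that standing on a surface never counts as an overlap. The Box and Mover classes below are simplified 2D stand-ins, not the poster's BoundingBox/XYZ types, and the coordinate convention assumes y grows downward as in typical screen coordinates.

```java
import java.util.List;

// Minimal axis-separated collision sketch (an assumed approach, not the original code).
class Box {
    float x, y, w, h; // top-left corner plus width/height; y grows downward
    Box(float x, float y, float w, float h) { this.x = x; this.y = y; this.w = w; this.h = h; }
    boolean overlaps(Box o) {
        // Strict inequalities: boxes that merely share an edge do not count as overlapping.
        return x < o.x + o.w && x + w > o.x && y < o.y + o.h && y + h > o.y;
    }
}

class Mover {
    static final float SKIN = 0.001f; // tiny gap kept between touching boxes

    // Move along one axis at a time; each axis is clamped independently,
    // so contact with the floor (y axis) never blocks motion along x,
    // and jumping away from the floor is never rejected.
    static void move(Box box, float dx, float dy, List<Box> solids) {
        slide(box, dx, 0f, solids);
        slide(box, 0f, dy, solids);
    }

    private static void slide(Box box, float dx, float dy, List<Box> solids) {
        box.x += dx;
        box.y += dy;
        for (Box s : solids) {
            if (!box.overlaps(s)) continue;
            if (dx > 0) box.x = s.x - box.w - SKIN;      // hit the left face of s
            else if (dx < 0) box.x = s.x + s.w + SKIN;   // hit the right face of s
            if (dy > 0) box.y = s.y - box.h - SKIN;      // landed on top of s
            else if (dy < 0) box.y = s.y + s.h + SKIN;   // bumped the underside of s
        }
    }
}
```

    With something like this, a sweep test can be reserved for fast-moving objects, while slow per-frame movement is resolved entirely by the per-axis push-out.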


  • SQLAuthority News – Presented Technical Session at DevReach 2013, Sofia, Bulgaria – Oct 1, 2013

    - by Pinal Dave
    Earlier this month, I had a fantastic time presenting at DevReach 2013 in Sofia, Bulgaria, on Oct 1, 2013. DevReach strives to be the premier developer conference in Central and Eastern Europe. It is organized annually in Sofia, Bulgaria, and the 8th edition of the conference moved to a new and bigger venue: Sofia Event Center. In my career I have presented in 9 different countries (India, USA, Canada, Singapore, Hong Kong, Malaysia, Sri Lanka, Nepal, Thailand), but this was the first time for me to present in Europe. DevReach was the perfect place to start my journey in Europe as an evangelist. The event was one of the most organized events I have ever come across; the DevReach organization team had perfected every minute detail. After the event was over I had the opportunity to see Sofia for one day. I presented one of my favorite sessions, Database Worst Practices. (Photos: Pinal presenting at DevReach 2013; Pinal Dave and Stephen Forte at the Pluralsight booth; Pinal on a city tour of Sofia, Bulgaria.) Session Title: Secrets of SQL Server: Database Worst Practices Abstract: “Oh my God! What did I do?” Chances are you have heard, or even uttered, this expression. This demo-oriented session will show many examples where database professionals were dumbfounded by their own mistakes, and could even bring back memories of your own early DBA days. The goal of this session is to expose the small details that can be dangerous to the production environment and SQL Server as a whole, as well as talk about worst practices and how to avoid them. Shedding light on some of these perils and the tricks to avoid them may even save your current job. Thanks to Team Telerik for making this one of the best events of my life. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: About Me, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL


  • Now Live: New Java Enterprise Edition 6 Certification Exams

    - by Paul Sorensen
    The new Java Enterprise Edition 6 (EE6) exams are being released into production, effective today (February 21, 2011). If you participated in the beta exams, we appreciate your patience in awaiting your beta scores (there were some initial technical difficulties with these exams that delayed beta review and scoring). While most of the production exams are currently available and most of the beta scores have been mailed, they are not 100% completed. We expect all production exams to be available and all scores to be mailed tentatively by March 31, 2011. We appreciate your patience in receiving your beta scores as we work through some issues that have delayed the release of beta scores for two of these exams. Beta candidates can expect to receive their printed score reports in the mail from Prometric. Please allow 5 business days from the 'date mailed' below to receive your score report. If you have not received it within 5 business days, please contact Prometric.
    Exam / production date / beta scores mailed:
    CX-311-093 Java Platform, Enterprise Edition 6 Enterprise JavaBeans Developer Certified Expert Exam - January 12, 2011 - February 4, 2011
    CX-311-094 Java Platform, Enterprise Edition 6 Java Persistence API Developer Certified Expert Exam - February 1, 2011 - February 11, 2011
    CX-311-232 Java Platform, Enterprise Edition 6 Web Services Developer Certified Expert Exam - February 8, 2011 - by March 31, 2011 (tentative*)
    CX-311-085 Java Platform, Enterprise Edition 6 JavaServer Pages and Servlet Developer Certified Expert Exam - by March 31, 2011 (tentative*) - by March 31, 2011 (tentative*)
    *Dates are subject to change without notice. Register now at prometric.com/oracle.
    Quick links: Help with beta exam score report | Oracle Certified Professional, Java Platform, Enterprise Edition 6 JavaServer Pages and Servlet Developer | Oracle Certified Professional, Java Platform, Enterprise Edition 6 Enterprise JavaBeans Developer | Oracle Certified Professional, Java Platform, Enterprise Edition 6 Java Persistence API Developer | Oracle Certified Professional, Java Platform, Enterprise Edition 6 Web Services Developer


  • OracleWebLogic YouTube Channel

    - by Jeffrey West
      The WebLogic Product Management Team has been working on content for an Oracle WebLogic YouTube channel to host demos and overview of WebLogic features.  The goal is to provide short educational overviews and demos of new, useful, or 'hidden gem' WLS features that may be underutilized.    We currently have 26 videos including: Coherence Server Lifecycle Management with WebLogic Server (James Bayer) WebLogic Server JRockit Mission Control Experimental Plugin (James Bayer) WebLogic Server Virtual Edition Overview and Deployment Oracle Virtual Assembly Builder (Mark Prichard) Migrating Applications from OC4J 10g to WebLogic Server with Smart Upgrade (Mark Prichard) WebLogic Server Java EE 6 Web Profile Demo (Steve Button) WebLogic Server with Maven and Eclipse (Steve Button) Advanced JMS Features: Store and Forward, Unit of Order and Unit of Work (Jeff West) WebLogic Scripting Tool (WLST) Recording, editing and Playback (Jeff West) Special thanks to Steve, Mark and James for creating quality content to help educate our community and promote WebLogic Server!  The Product Management Team will be making ongoing updates to the content.  We really do want people to give us feedback on what they want to see with regard to WebLogic.  Whether its how you achieve a certain architectural goal with WLS or a demonstration and sample code for a feature - All requests related to WLS are welcome! You can find the channel here: http://www.YouTube.com/OracleWebLogic.  Please comment on the Channel or our WebLogic Server blog to let us know what you think.  Thanks!


  • What You Said: Your Favorite Co-Op Games

    - by Jason Fitzpatrick
    While competitive gaming is fun, reader response to this week’s Ask the Readers question shows that good old beat-the-bad-guys-together cooperative gaming is as popular as ever. Read on to see what your fellow readers are playing. By far the most popular nomination for favorite co-op game was an outright classic: 1987′s smash hit Contra. Originally released as an arcade game, it was ported to the Nintendo Entertainment System in 1988. Contra was groundbreaking for the time as it featured simultaneous play for the two players–you and a friend could play side by side without waiting to take your turn. Clearly that kind of side-by-side play resonated with readers. RJ writes: When my fiance and I played and beat Contra on the NES. I knew she was the one and we got married and its been great. That’s no small feat; Contra was voted “Toughest Game to Beat” by IGN.com readers. Even readers who had moved on to newer games still recall Contra fondly; Jami writes: The Gears of War trilogy on 360 is my favorite co-op currently, although I do have fond memories of bonding with my brother playing some co-op Contra on the NES.


  • Web Essentials extension now available for Visual Studio Express

    - by ihaynes
    Originally posted on: http://geekswithblogs.net/ihaynes/archive/2014/08/12/web-essentials-extension-now-available-for-visual-studio-express.aspxVisual Studio Express has always had one big issue for me, the inability to install any (or very few) of the normal Visual Studio extensions. One of the best extensions for front-end development and design is the Web Essentials extension by Mads Kristensen on the VS team. It was a hugely welcome surprise to find that this extension is now available, in full, for VS Express, and had been since May, darn it! It has a huge number of useful features, not least being able to minify HTML, CSS and JavaScript files, which is almost a prerequisite for responsive sites. It can also bundle them together. There are lots of other features for HTML editing; intellisense and syntax highlighting for robots.txt files; 'surround with tag', intellisense for meta tags (including iOS, Twitter and Facebook); generate vendor-specific CSS etc etc. The full details can be found at the dedicated site. http://vswebessentials.com This extension alone makes VS Express far more useful and worth having.


  • Visually stunning maps and PivotViewer

    - by Rob Farley
    One of the things about PivotViewer is that it runs on the Silverlight platform and can be extended. Recently, one of my guys at LobsterPot, Roger Noble, used this to incorporate a Bing Maps layer, showing items which have Latitude and Longitude values there. We’re already talking to a hospital about using this to allow them to browse their patient data, including showing the patients on a map according to which bed they’re in. Interesting times – this will involve having custom tiles instead of the ones from Bing Maps, but the idea is similar. Of course, we’ll be using Bing Maps to show where the patients live. I should also mention that this is still a work in progress. Figuring out how to use PivotViewer isn’t trivial, and we’ve done quite a lot of experimenting to see how to get things working. If you find bugs, please feel free to let me know (rob_farley at hotmail will usually reach me), and we’ll add them to our list. Here are some screenshots that I made recently using the collection at http://pivot.lobsterpot.com.au/flickr – by selecting a tag, you can get a new bunch of images. A couple of the images were taken in Iceland, some at St Mary’s Lighthouse near Newcastle, UK, and some around Big Ben in London. I’d recommend using either Firefox or Internet Explorer if you choose to browse this yourself; it seems the Chrome browser support for Silverlight doesn’t quite handle things as nicely as we’d all like. I imagine that at some point we may enhance the Flickr collection to be able to search on more than tags, but as a sample collection it seems to work quite well.


  • ASP.NET developers turning to Visual WebGui for rich management system

    - by Webgui
    When The Center for Organ Recovery & Education (CORE) decided they needed a web application to allow easy access to their expenses management system, they initially went with ASP.NET Web Forms combined with CSS. The outcome, however, was not satisfying, as it appeared bland and lacked richness. So in order to enrich the UI and give the web application some glitz, Visual WebGui was selected. Visual WebGui provided the needed richness, and the familiar Windows look and feel also made the transition very easy for the desktop users. The richer GUI of Visual WebGui compared to ASP.NET raised some initial concerns about performance, but performance turned out to be a surprising advantage, as the website maintained good response times. Working with Visual WebGui required a paradigm shift for the development process, as some of the usual methods of coding with ASP.NET did not apply. However, the transition was fairly easy due to the simplicity and intuitiveness of Visual WebGui as well as the good support and documentation. “The shift into a different development paradigm was eased by the Visual WebGui web forums, which are very active thanks to a large, involved community. There are also several videos and web pages dedicated to answering the most commonly asked questions and pitfalls,” said Dave Bhatia, Systems Engineer, who added: “A couple of issues such as deploying on IIS7 seemed to be show stoppers at first; however, the solution was readily available in a white paper on the Gizmox website.” The full story is found on the Visual WebGui website: http://www.visualwebgui.com/Gizmox/Resources/CaseStudies/tabid/358/articleType/ArticleView/articleId/964/The-Center-for-Organ-Recovery-Education-gets-a-web-based-expenses-management-system.aspx


  • #SQLMug - Like a collectors set of 5 x geeky SQL Mugs?

    - by Greg Low
    Hi Folks, For a while I've been wanting to get some great SQL mugs printed for SQL Down Under, but I need further inspiration, so here's your chance to get a collectors' set of 5 SQL mugs: Send me (greg @ sqldownunder . com) a great line to go onto the mugs, along with your country and a delivery address. I'll pick the best 5 and get mugs printed with those sayings. If you're one of the 5, I'll send you a collectors' set with one of each of the 5. Simple enough? Here are some ideas I've already received to get you started: Chuck Norris gets NULL. Nothing compares to him either. ALTER MUG SET SINGLE_USER WITH ROLLBACK IMMEDIATE; DENY CONTROL ON OBJECT::MUG TO public; knock knock who's there? sp_ sp_who? spid 1, spid 2, spid 3, spid 4... ALTER DATABASE CriticalDB SET ChuckNorrisMode = ON WITH NOWAIT; I'll probably cut off new entries around the end of April.


  • Hudson.. another Continuous Integration tool

    - by Narendra Tiwari
    In my previous posts I discussed CruiseControl.NET and its legacy support for .NET development. Hudson is yet another continuous integration tool. Like CCNet, Hudson is free, and it is built in Java.
    - CCNet has its legacy support for .NET applications, whereas Hudson can easily be configured for both environments (.NET and Java).
    - One of the major differences between CCNet and Hudson is that Hudson's richer GUI provides interactive screens for project configuration, whereas in CCNet we have to work with a few XML configuration files.
    Both tools provide the basic features of continuous integration, e.g.:
    - Source control configuration
    - Code compilation/build
    - Ad hoc plugin tools configured along with compilation
    Support for ad hoc plugin tools seems to be broader with CCNet; for example, there is a source control plugin for almost every source control system in CCNet, whereas Hudson supports a more limited set of source control servers. An interesting point is that the whole CI system has two major parts: the part handled by the build tool, and the rest. The build tool takes care of all the ad hoc plugin tools, so it does not matter if the CI tool has no plugin for a given tool; as long as the tool provides command-line support, it can be configured in the build tool, and the build tool is in turn configured with the CI tool. For example, if I have a build script configured in MSBuild, CCNet can easily be switched to Hudson: nothing in the build script needs to change; we just configure MSBuild on Hudson, pass the path of the script file, and that's it... everything else stays the same.
    Hudson resources:
    - https://hudson.dev.java.net/
    - http://wiki.hudson-ci.org/display/HUDSON/Meet+Hudson
    - http://wiki.hudson-ci.org/display/HUDSON/Plugins
    - http://callport.blogspot.com/2009/02/hudson-for-net-projects.html
    Java support on CCNet: http://confluence.public.thoughtworks.org/display/CC/Getting+Started+With+CruiseControl?focusedCommentId=19988484#comment-19988484
    Please share your thoughts...


  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead. The Long Road To Stubs This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled: 4631488 lib/Makefile is too patient: .WAITs should be reduced This CR encapsulates a number of cronic issues with Solaris builds: We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware. Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everying in parallel. To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifing. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, its hard to think calmly, and there is no time for deep fixes. 
As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but but there are complications that limit the usefulness of that approach: To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. By definition, such an approach will always be approximate, and therefore only incremantally more accurate than the hand written rules described above. The hand written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand written approach. Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot. In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that lead us to a game changing series of realizations: The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object. In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of its publically available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust. 
We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following: Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment. Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object. We imagined the stub library feature working as follows: A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any object or shared libraries on the command line are are ignored. The extra information needed (function or data, size, and bss details) would be added to the mapfile. When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match. In exploring these ideas, I immediately run headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extentions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like: # DATA(i386) __iob 0x3c0 # DATA(amd64,sparcv9) __iob 0xa00 # DATA(sparc) __iob 0x140 iob; A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan: A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code. Another perl script used after both objects have been built, to compare the real and stub objects, using data from elfdump, and validate that they present the same linking interface. By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level. 
Ultimately though, the result was unsatisfactory as a basis for real product. There were so many issues: The use of stylized comments were fine for a prototype, but not close to professional enough for shipping product. The idea of having to document and support it was a large concern. The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code. A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work. A stub object should be identifyable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one. At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years. Semi-coincidentally, one of the projects I tacked after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had them in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in in February 2010: 6916788 ld version 2 mapfile syntax PSARC/2009/688 Human readable and extensible ld mapfile syntax In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010: 6916796 OSnet mapfiles should use version 2 link-editor syntax That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax. 
I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can can be built in a fraction of a second: % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \ -ztext -zdefs -Bdirect ... real 0.019708910 user 0.010101680 sys 0.008528431 In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished. Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects however was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it. Without stubs, the following gives a simplified high level view of how Solaris is built: An initially empty directory known as the proto, and referenced via the ROOT makefile macro is established to receive the files that make up the Solaris distribution. A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area. Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built. Subsequent passes run lint, and do packaging. Given this structure, the additions to use stub objects are: A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away. 
A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them. A new target is added to the Makefiles called stubinstall which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of existing plumbing used by the existing install rule. The setup rule runs stubinstall over the entire lib subtree as part of its initialization. All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization. There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them. After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would made merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one. 
And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefile, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypothesis were proposed, and shot down: Could we have disabled dmakes parallel feature? No, a quick check showed things being build in parallel. It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timing between the stub and non-stub cases were just too suspiciously identical. Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up. Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust. And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with 6993877 ld should produce stub objects PSARC/2010/397 ELF Stub Objects followed by the work to convert the ON consolidation in snv_161 (February 2011) with 7009826 OSnet should use stub objects 4631488 lib/Makefile is too patient: .WAITs should be reduced This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big, a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then. Conclusions and Looking Forward Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • Ask the Readers: How Fast is Your Internet Connection?

    - by Mysticgeek
    The federal government recently announced a broadband initiative that calls for 100 million homes to have 100Mbps Internet connections by the year 2020. This got us wondering: how fast is your current Internet connection? Photo by roland. When it comes to the speed of our Internet connection, we all want the maximum possible. The FCC recently announced their National Broadband Plan, which is an initiative to improve the Internet infrastructure in the United States and provide higher speeds to everyone. You’ve also undoubtedly heard the news about Google getting into the mix with their program to bring ultra high-speed fiber broadband to 50,000 users in select cities. While we wait for those programs to come to fruition, we thought it would be cool to check out what kinds of speeds you’re getting now.
    Test Your Internet Connection Speed
    There are several sites out there you can use to test your Internet speeds, but probably the best site is Speedtest.net. It’s easy to use, and allows you to test download and upload speeds to and from various locations in the US and throughout the world. If you already know the speeds you’re getting, leave a comment and let us know. If you use Speedtest.net, just keep in mind that our comment system won’t allow you to copy their result links, but you can simply tell us what you get in the results. We’re especially interested in the results of those of you who have Verizon FIOS or Comcast’s “Ultra” service. Leave a comment and join in the discussion!

    Read the article

  • Cloud-Burst 2012 – Windows Azure Developer Conference in Sweden

    - by Alan Smith
    The Sweden Windows Azure Group (SWAG) will be running “Cloud-Burst 2012”, a two-day Windows Azure conference hosted at the Microsoft offices in Akalla, near Stockholm, on the 27th and 28th of September, with an Azure Hands-on Labs Day at AddSkills on the 29th of September. The event is free to attend, and will feature presentations on the latest Azure technologies from Microsoft MVPs and evangelists. The following presentations will be delivered on the Thursday (27th) and Friday (28th):
    · Connecting Devices to Windows Azure - Windows Azure Technical Evangelist Brady Gaster
    · Grid Computing with 256 Windows Azure Worker Roles - Connected System Developer MVP Alan Smith
    · ‘Warts and all’. The truth about Windows Azure development - BizTalk MVP Charles Young
    · Using Azure to Integrate Applications - BizTalk MVP Charles Young
    · Riding the Windows Azure Service Bus: Cross-‘Anything’ Messaging - Windows Azure MVP & Regional Director Christian Weyer
    · Windows Azure, Identity & Access - and you - Developer Security MVP Dominick Baier
    · Brewing Beer with Windows Azure - Windows Azure MVP Maarten Balliauw
    · Architectural patterns for the cloud - Windows Azure MVP Maarten Balliauw
    · Windows Azure Web Sites and the Power of Continuous Delivery - Windows Azure MVP Magnus Mårtensson
    · Advanced SQL Azure - Analyze and Optimize Performance - Windows Azure MVP Nuno Godinho
    · Architect your SQL Azure Databases - Windows Azure MVP Nuno Godinho
    There will be a chance to get your hands on the latest Azure bits and an Azure trial account at the Hands-on Labs Day on Saturday (29th), with Brady Gaster, Magnus Mårtensson and Alan Smith there to provide guidance, and some informal and entertaining presentations. Attendance for the conference and Hands-on Labs Day is free, but please only register if you can make it (and cancel if you cannot). Cloud-Burst 2012 event details and registration are here: http://www.azureug.se/CloudBurst2012/ Registration for Sweden Windows Azure Group Stockholm is here: swagmembership.eventbrite.com The event has been made possible by kind contributions from our sponsors, Knowit, AddSkills and Microsoft Sweden.

    Read the article

  • MCSE and MCSA make a return to the world of certification... but not as you know it.

    - by Testas
    Quick announcement
    Microsoft Learning today announced the certification tracks for the upcoming SQL Server 2012 exams.
    You begin by achieving the MCSA - Microsoft Certified Solutions Associate (not to be confused with the old Microsoft Certified Systems Administrator). If you are starting out, this includes taking the following three exams:
    Exam 70-461: Querying Microsoft SQL Server 2012
    Exam 70-462: Administering Microsoft SQL Server 2012 Databases
    Exam 70-463: Implementing a Data Warehouse with Microsoft SQL Server 2012
    If you already have an MCTS in SQL Server 2008, you can take the following path:
    A pass in a SQL Server 2008 (MCTS) Microsoft Certified Technology Specialist exam
    Exam 70-457: Transitioning your MCTS on SQL Server 2008 to MCSA on SQL Server 2012, part 1
    Exam 70-458: Transitioning your MCTS on SQL Server 2008 to MCSA on SQL Server 2012, part 2
    Once you have achieved your MCSA status, you can then start on your MCSE - Microsoft Certified Solutions Expert certification. You have a choice: the MCSE: SQL Server 2012 Data Platform, the MCSE: SQL Server 2012 Business Intelligence, or both.
    MCSE: SQL Server 2012 Data Platform involves:
    Obtaining your SQL Server 2012 MCSA
    Exam 70-464: Developing Microsoft SQL Server 2012 Databases
    Exam 70-465: Designing Database Solutions for Microsoft SQL Server 2012
    There is also an upgrade path:
    A pass in a SQL Server 2008 (MCITP) Microsoft Certified IT Professional Database Administrator or Database Developer certification
    Exam 70-457: Transitioning your MCTS on SQL Server 2008 to MCSA on SQL Server 2012, part 1
    Exam 70-458: Transitioning your MCTS on SQL Server 2008 to MCSA on SQL Server 2012, part 2
    Exam 70-459: Transitioning your MCITP on SQL Server 2008 Database Administrator or Database Developer to MCSE: Data Platform
    MCSE: SQL Server 2012 Business Intelligence involves:
    Obtaining your SQL Server 2012 MCSA
    Exam 70-466: Implementing Data Models and Reports with Microsoft SQL Server 2012
    Exam 70-467: Designing Business Intelligence Solutions with Microsoft SQL Server 2012
    The upgrade path involves:
    A pass in a SQL Server 2008 (MCITP) Microsoft Certified IT Professional Business Intelligence certification
    Exam 70-457: Transitioning your MCTS on SQL Server 2008 to MCSA on SQL Server 2012, part 1
    Exam 70-458: Transitioning your MCTS on SQL Server 2008 to MCSA on SQL Server 2012, part 2
    Exam 70-460: Transitioning your MCITP on SQL Server 2008 Business Intelligence Developer to MCSE: Business Intelligence
    As a result, if you want to achieve the MCSE in either Data Platform or Business Intelligence and you are starting from scratch, there will be 5 exams to take. If you are able to upgrade your certification because you already have an MCITP, then it will be three exams.
    Full details and questions can be found at http://www.microsoft.com/learning/en/us/certification/cert-sql-server.aspx
    Thanks
    Chris

    Read the article

  • Windows expand over 2 monitors in quad-monitor setup

    - by Martin
    I just installed Ubuntu 11.10 with my previous hardware setup: 4 monitors and 2 identical NVIDIA graphics cards. Dragging windows around all 4 monitors works nicely, but when I maximize a window it always expands over 2 screens (2x TwinView). I had a workaround for this in 11.04 but can't remember what it was... Does anyone have quad monitors up and running with window maximizing on only one screen? My xorg.conf looks like this:
        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 280.13 (buildd@allspice) Thu Aug 11 20:54:45 UTC 2011
        # nvidia-xconfig: X configuration file generated by nvidia-xconfig
        # nvidia-xconfig: version 280.13 ([email protected]) Wed Jul 27 17:15:58 PDT 2011
        Section "ServerLayout"
            # Removed Option "Xinerama" "1"
            # Removed Option "Xinerama" "0"
            Identifier "Layout0"
            Screen 0 "Screen0" 0 0
            Screen 1 "Screen1" 0 1080
            InputDevice "Keyboard0" "CoreKeyboard"
            InputDevice "Mouse0" "CorePointer"
            Option "Xinerama" "1"
        EndSection
        Section "Files"
        EndSection
        Section "InputDevice"
            # generated from default
            Identifier "Mouse0"
            Driver "mouse"
            Option "Protocol" "auto"
            Option "Device" "/dev/psaux"
            Option "Emulate3Buttons" "no"
            Option "ZAxisMapping" "4 5"
        EndSection
        Section "InputDevice"
            # generated from default
            Identifier "Keyboard0"
            Driver "kbd"
        EndSection
        Section "Monitor"
            Identifier "Monitor0"
            VendorName "Unknown"
            ModelName "Samsung SMB2220N"
            HorizSync 31.0 - 80.0
            VertRefresh 56.0 - 75.0
            Option "DPMS"
        EndSection
        Section "Monitor"
            Identifier "Monitor1"
            VendorName "Unknown"
            ModelName "Samsung SMB2220N"
            HorizSync 31.0 - 80.0
            VertRefresh 56.0 - 75.0
            Option "DPMS"
        EndSection
        Section "Device"
            Identifier "Device0"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BoardName "GeForce GTX 550 Ti"
            BusID "PCI:2:0:0"
        EndSection
        Section "Device"
            Identifier "Device1"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BoardName "GeForce GTX 550 Ti"
            BusID "PCI:3:0:0"
        EndSection
        Section "Screen"
            # Removed Option "TwinView" "True"
            # Removed Option "MetaModes" "nvidia-auto-select, nvidia-auto-select"
            # Removed Option "metamodes" "CRT-0: nvidia-auto-select 1920x1080 +0+0, CRT-1: nvidia-auto-select 1920x1080 +1920+0"
            Identifier "Screen0"
            Device "Device0"
            Monitor "Monitor0"
            DefaultDepth 24
            Option "TwinView" "1"
            Option "TwinViewXineramaInfoOrder" "CRT-0"
            Option "metamodes" "CRT-0: nvidia-auto-select +0+0, CRT-1: nvidia-auto-select +1920+0"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection
        Section "Screen"
            # Removed Option "TwinView" "True"
            # Removed Option "MetaModes" "nvidia-auto-select, nvidia-auto-select"
            # Removed Option "metamodes" "CRT-0: nvidia-auto-select 1920x1080 +0+0, CRT-1: nvidia-auto-select 1920x1080 +1920+0"
            Identifier "Screen1"
            Device "Device1"
            Monitor "Monitor1"
            DefaultDepth 24
            Option "TwinView" "1"
            Option "TwinViewXineramaInfoOrder" "CRT-0"
            Option "metamodes" "CRT-0: nvidia-auto-select +0+0, CRT-1: nvidia-auto-select +1920+0"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection
        Section "Extensions"
            Option "Composite" "Disable"
        EndSection

    Read the article

  • New ACS Resell Portfolio for OPN Members

    - by rituchhibber
    Oracle Advanced Customer Support (ACS) Services is pleased to announce availability of the ACS Resell Portfolio to Oracle PartnerNetwork (OPN) members on June 28, 2012. The ACS Resell Portfolio is available to Gold level OPN members and above selling to end users with valid Oracle Premier Support/End User agreements, and in countries where ACS has a local in-country presence to support the partner business. ACS provides mission critical support services for complex IT environments to help maximize performance, achieve higher availability, and reduce risk. The ACS Resell Portfolio can be leveraged to reduce time to market and drive improved end user satisfaction. Including ACS services at point of license sale can maximize your success as an Oracle partner.       On July 10, 2012, Oracle ACS is hosting a 60-minute resell portfolio training session. Topics include: ACS Resell Portfolio objectives   Partner participation requirements ACS portfolio services enabled for partner resell ACS sales engagement and transaction processes Contracting requirements Attend the following session to hear how you can maximize your profit opportunities by including ACS services, which compliment your solutions with integrated Oracle advanced support technologies.      DIAL-IN INFORMATION Webconference July 10, 2012 4:00 PM CEST Webconference Session Number: 591 988 820 Session Password: ebh12345 International: 706.501.7506 US: 866.589.6202 Call ID: 95867658 Click here for a list of toll-free international numbers. Please contact Support-Partner-Questions_ww_grp@oracle.com with any questions or visit the ACS website.

    Read the article

  • C# wpf helix scale based mesh parenting using Transform3DGroup

    - by Rick2047
    I am using https://helixtoolkit.codeplex.com/ as a 3D framework. I want to move the black mesh relative to the green mesh as shown in the attached image below. I want to make the green mesh a parent of the black mesh, so that a change in scale of the green mesh will also result in motion of the black mesh. It could be partial parenting or maybe more. I need 3D rotation and 3D translation, plus a translation along the green mesh's length axis, for the black mesh relative to the green mesh itself. Suppose a variable green_mesh_scale scales the green mesh along its length axis; the black mesh will use that variable in order to move along the green mesh's length axis. How should I go about it? I've done the following:
        GeometryModel3D GreenMesh, BlackMesh;
        ...
        double green_mesh_scale = e.NewValue;
        Transform3DGroup forGreen = new Transform3DGroup();
        Transform3DGroup forBlack = new Transform3DGroup();
        forGreen.Children.Add(new ScaleTransform3D(new Vector3D(1, green_mesh_scale, 1)));
        // ... transforms for rotation and translation
        GreenMesh.Transform = forGreen;
        forBlack = forGreen;
        forBlack.Children.Add(new TranslateTransform3D(new Vector3D(0, green_mesh_scale, 0)));
        BlackMesh.Transform = forBlack;
    The problem with this is that the scale transform will also be applied to the black mesh. I think I just need to avoid the scale part. I tried keeping all the transforms except the scale on another Transform3DGroup variable, but that also did not behave as expected. Can MatrixTransform3D be used here somehow? Also, please suggest if this question could be posted somewhere else on Stack Exchange.
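    One way to get this kind of partial parenting is to share the rotation and translation transforms between the two meshes, but keep the scale only in the green mesh's transform group and compensate on the black mesh with a translation along the length axis. The sketch below is only illustrative; the names greenBaseLength, sharedRotation and sharedTranslation are assumptions that do not appear in the question:
        // using System.Windows.Media.Media3D;
        // Minimal sketch: the scale stays on the green mesh only; the black mesh follows
        // the scaled end of the green mesh via a translation instead of inheriting the scale.
        void UpdateTransforms(GeometryModel3D greenMesh, GeometryModel3D blackMesh,
                              double green_mesh_scale, double greenBaseLength,
                              Transform3D sharedRotation, Transform3D sharedTranslation)
        {
            var forGreen = new Transform3DGroup();
            forGreen.Children.Add(new ScaleTransform3D(new Vector3D(1, green_mesh_scale, 1)));
            forGreen.Children.Add(sharedRotation);
            forGreen.Children.Add(sharedTranslation);
            greenMesh.Transform = forGreen;

            var forBlack = new Transform3DGroup();
            // greenBaseLength is assumed to be the unscaled extent of the green mesh along its
            // length (Y) axis, so greenBaseLength * green_mesh_scale places the black mesh at the scaled end.
            forBlack.Children.Add(new TranslateTransform3D(new Vector3D(0, greenBaseLength * green_mesh_scale, 0)));
            forBlack.Children.Add(sharedRotation);
            forBlack.Children.Add(sharedTranslation);
            blackMesh.Transform = forBlack;
        }
    Because both groups append the same sharedRotation and sharedTranslation instances after their own leading transform, the black mesh rotates and moves with the green mesh while only the green mesh stretches.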

    Read the article

  • hp pavilion g6 1250 wireless problems

    - by Ahmed Kotb
    I have tried using Ubuntu 10.04 and Ubuntu 11.10, and both have the same problem: the driver is detected by the additional proprietary drivers wizard, and after installation Ubuntu can't see any wireless network except one, which is not mine (and I can't connect to it as it is secured). There are plenty of wireless networks around me, but Ubuntu can't detect them, and if I try to connect to one of them as if it were hidden, the connection times out. The command lspci -nvn | grep -i net gives:
        04:00.0 Network controller [0280]: Broadcom Corporation BCM4313 802.11b/g/n Wireless LAN Controller [14e4:4727] (rev 01)
        05:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller [10ec:8136] (rev 05)
    iwconfig gives:
        lo no wireless extensions.
        eth0 no wireless extensions.
        wlan0 IEEE 802.11bgn ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=19 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off
    I guess it is something related to the Broadcom driver, but I don't know; any help will be appreciated.
    UPDATE: OK, I installed a fresh copy of 11.10 to remove the effect of any earlier trials, and I followed the link (http://askubuntu.com/q/67806) as suggested. All I have done so far is try the command lsmod | grep brc, which gave me the following:
        brcmsmac 631693 0
        brcmutil 17837 1 brcmsmac
        mac80211 310872 1 brcmsmac
        cfg80211 199587 2 brcmsmac,mac80211
        crc_ccitt 12667 1 brcmsmac
    Then I blacklisted all the other drivers as mentioned in the link. The wireless is still disabled. In the last installation, installing the Broadcom STA driver from the additional drivers enabled the menu, but as I have said before, it wasn't able to connect or even get a list of available networks. So what should I do now? The output of the command rfkill list all is:
        0: phy0: Wireless LAN
        Soft blocked: no
        Hard blocked: no

    Read the article

  • Focus on Oracle Data Profiling and Data Quality 11g - 24/Feb/11

    - by Claudia Costa
    Thursday 24th February, 11am GMT
    Oracle offers an integrated suite of Data Quality software architected to discover and correct today's data quality problems and establish a platform prepared for tomorrow's yet unknown data challenges.
    Oracle Data Profiling provides data investigation, discovery, and profiling in support of quality, migration, integration, stewardship, and governance initiatives. It includes a broad range of features that expand upon basic profiling, including automated monitoring, business-rule validation, and trend analysis.
    Oracle Data Quality for Data Integrator provides cleansing, standardization, matching, address validation, location enrichment, and linking functions for global customer data and operational business data. It ensures that data adheres to established standards that are adaptable to fit each organization's specific needs. Both single- and double-byte data are processed in local languages to provide a unique and centralized view of customers, products and services.
    During this briefing, Data Integration Solution Specialists will provide a technical overview and a walkthrough.
    Agenda:
    Oracle Data Integration Strategy overview
    A focus on Oracle Data Profiling and Oracle Data Quality for Data Integrator:
    Oracle Data Profiling
    Oracle Data Quality for Data Integrator
    Live demo
    Q&A
    This FREE online LIVE eSeminar will be delivered over the Web and Conference Call. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. To register click here. For any questions please contact [email protected]

    Read the article

< Previous Page | 1677 1678 1679 1680 1681 1682 1683 1684 1685 1686 1687 1688  | Next Page >