Search Results

Search found 9083 results on 364 pages for 'startup scripts'.


  • What does your Python development workbench look like?

    - by Fabian Fagerholm
    First, a scene-setter to this question: several questions on this site have to do with selection and comparison of Python IDEs (the top one currently is What IDE to use for Python). In the answers you can see that many Python programmers use simple text editors, many use sophisticated text editors, and many use a variety of what I would call "actual" integrated development environments – a single program in which all development is done: managing project files, interfacing with a version control system, writing code, refactoring code, making build configurations, writing and executing tests, "drawing" GUIs, and so on. Through its GUI, an IDE supports different kinds of workflows to accomplish different tasks during the journey of writing a program or making changes to an existing one. The exact features vary, but a good IDE has sensible workflows and automates things to let the programmer concentrate on the creative parts of writing software.
    The non-IDE way of writing large programs relies on a collection of tools that are typically single-purpose; they do "one thing well" as per the Unix philosophy. This "non-integrated development environment" can be thought of as a workbench, supported by the OS and generic interaction through a text or graphical shell. The programmer creates workflows in their mind (or in a wiki?), automates parts and builds a personal workbench, often gradually and as experience accumulates. The learning curve is often steeper than with an IDE, but those who have taken the time to do this can often claim deeper understanding of their tools. (Whether they are better programmers is not part of this question.) With advanced editor-platforms like Emacs, the pieces can be integrated into a whole, while with simpler editors like gedit or TextMate, the shell/terminal is typically the "command center" to drive the workbench. Sometimes people extend an existing IDE to suit their needs.
    What does your Python development workbench look like? What workflows have you developed and how do they work?
    For the first question, please give: the main "driving" program – the one that you use to control the rest (Emacs, shell, etc.); and the "small tools" – the programs you reach for when doing different tasks.
    For the second question, please describe: what the goal of the workflow is (e.g. "set up a new project", "doing initial code design", "adding a feature" or "executing tests"); and what steps are in the workflow and what commands you run for each step (e.g. in the shell or in Emacs).
    Also, please describe the context of your work: do you write small one-off scripts, do you do web development (with what framework?), do you write data-munching applications (what kind of data and for what purpose), do you do scientific computing, desktop apps, or something else?
    Note: a good answer addresses the perspectives above – it doesn't just list a bunch of tools. It will typically be a long answer, not a short one, and will take some thinking to produce; maybe even observing yourself working.

    Read the article

  • Eclipse Crashes on Ubuntu 11.10

    - by Adrian Matteo
    I'm using Eclipse Indigo with aptana, to develope a rails application and it was working fine, but now it keeps crashing on startup. It opens and when the loading bars appear on the status bar, it goes gray (not responding) and the in closes without an error. Here is the output from the terminal when I ran it from there: (Eclipse:7391): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (Eclipse:7391): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (Eclipse:7391): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (Eclipse:7391): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", 2012-05-27 16:05:58.272::INFO: Logging to STDERR via org.mortbay.log.StdErrLog 2012-05-27 16:06:00.586::INFO: jetty-6.1.11 2012-05-27 16:06:00.743::INFO: Started [email protected]:8500 2012-05-27 16:06:00.744::INFO: Started [email protected]:8600 2012-05-27 16:06:01.999::INFO: jetty-6.1.11 2012-05-27 16:06:01.029::INFO: Opened /tmp/jetty_preview_server.log 2012-05-27 16:06:01.046::INFO: Started [email protected]:8000 2012-05-27 16:06:01.071::INFO: jetty-6.1.11 2012-05-27 16:06:01.016::INFO: Started [email protected]:8300 ** (Eclipse:7391): DEBUG: NP_Initialize ** (Eclipse:7391): DEBUG: NP_Initialize succeeded No bp log location saved, using default. [000:000] Browser XEmbed support present: 1 [000:000] Browser toolkit is Gtk2. [000:001] Using Gtk2 toolkit ERROR: Invalid browser function table. Some functionality may be restricted. [000:056] Warning(optionsfile.cc:47): Load: Could not open file, err=2 [000:056] No bp log location saved, using default. [000:056] Browser XEmbed support present: 1 [000:056] Browser toolkit is Gtk2. [000:056] Using Gtk2 toolkit ** (Eclipse:7391): DEBUG: NP_Initialize ** (Eclipse:7391): DEBUG: NP_Initialize succeeded ** (Eclipse:7391): DEBUG: NP_Initialize ** (Eclipse:7391): DEBUG: NP_Initialize succeeded ** (Eclipse:7391): DEBUG: NP_Initialize ** (Eclipse:7391): DEBUG: NP_Initialize succeeded java version "1.6.0_23" OpenJDK Runtime Environment (IcedTea6 1.11pre) (6b23~pre11-0ubuntu1.11.10.2) OpenJDK 64-Bit Server VM (build 20.0-b11, mixed mode) java.io.FileNotFoundException: /home/amatteo/.eclipse/org.eclipse.platform_3.7.0_155965261/configuration/portal.1.2.7.024747/aptana/favicon.ico (No such file or directory) at java.io.FileInputStream.open(Native Method) at java.io.FileInputStream.<init>(FileInputStream.java:120) at com.aptana.ide.server.jetty.ResourceBaseServlet.doGet(ResourceBaseServlet.java:136) at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) at javax.servlet.http.HttpServlet.service(HttpServlet.java:820) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:487) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:362) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:729) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.Server.handle(Server.java:324) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:505) at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:829) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:513) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:211) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:380) at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228) at 
org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:488) 2012-05-27 16:06:03.277::WARN: /favicon.ico: java.io.IOException: /home/amatteo/.eclipse/org.eclipse.platform_3.7.0_155965261/configuration/portal.1.2.7.024747/aptana/favicon.ico (No such file or directory) It was working perfectly till a few days ago!

    Read the article

  • Ghost team foundation build controllers

    - by Martin Hinshelwood
    Quite often after an upgrade there are things left over. Most of the time they are easy to delete, but sometimes it takes a little effort. Even rarer are those times when something just will not go away no matter how much you try. We have had a ghost team build controller hanging around for a while now, and it had defeated my best efforts to get rid of it. The build controller was from our old TFS server from before our TFS 2010 beta 2 upgrade and was really starting to annoy me. Every time I try to delete it I get the message: Controller cannot be deleted because there are build in progress -Manage Build Controller dialog   Figure: Deleting a ghost controller does not always work. I ended up checking all of our 172 Team Projects for the build that was queued, but did not find anything. Jim Lamb pointed me to the “tbl_BuildQueue” table in the team Project Collection database and sure enough there was the nasty little beggar. Figure: The ghost build was easily spotted Adam Cogan asked me: “Why did you suspect this one?” Well, there are a number of things that led me to suspect it: QueueId is very low: Look at the other items, they are in the thousands not single digits ControllerId: I know there is only one legitimate controller, and I am assuming that 6 relates to “zzUnicorn” DefinitionId: This is a very low number and I looked it up in “tbl_BuildDefinition” and it did not exist QueueTime: As we did not upgrade to TFS 2010 until late 2009 a date of 2008 for a queued build is very suspect Status: A status of 2 means that it is still queued This build must have been queued long ago when we were using TFS 2008, probably a beta, and it never got cleaned up. As controllers are new in TFS 2010 it would have created the “zzUnicorn” controller to handle any build servers that already exist. I had previously deleted the Agent, but leaving the controller just looks untidy. Now that the ghost build has been identified there are two options: Delete the row I would not recommend ever deleting anything from the database to achieve something in TFS. It is really not supported. Set the Status to cancelled (Recommended) This is the best option as TFS will then clean it up itself So I set the Status of this build to 2 (cancelled) and sure enough it disappeared after a couple of minutes and I was then able to then delete the “zzUnicorn” controller. Figure: Almost completely clean Now all I have to do is get rid of that untidy “zzBunyip” agent, but that will require rewriting one of our build scripts which will have to wait for now.   Technorati Tags: ALM,TFBS,TFS 2010
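    For reference, the recommended fix boils down to a single UPDATE against the tbl_BuildQueue table in the Team Project Collection database mentioned above. This is only a sketch: the QueueId is a hypothetical value for the ghost row, and the Status value is the one the post reports setting - verify the correct 'cancelled' value for your TFS version (and take a backup) before changing anything in the collection database by hand.
      -- mark the ghost build as cancelled so TFS cleans it up itself
      UPDATE tbl_BuildQueue
      SET    Status = 2      -- value the post sets for 'cancelled'; confirm for your TFS version
      WHERE  QueueId = 1;    -- hypothetical QueueId of the ghost build identified above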

    Read the article

  • How to recreate spfile on Exadata?

    - by Bandari Huang
    Copy the spfile from the ASM diskgroup to local disk using the ASMCMD command line tool:
      ASMCMD> pwd
      +DATA_DM01/EDWBASE
      ASMCMD> ls -l
      Type Redund Striped Time Sys Name
                               Y   CONTROLFILE/
                               Y   DATAFILE/
                               Y   ONLINELOG/
                               Y   PARAMETERFILE/
                               Y   TEMPFILE/
                               N   spfileedwbase.ora => +DATA_DM01/EDWBASE/PARAMETERFILE/spfile.355.800017117
      ASMCMD> cp +DATA_DM01/EDWBASE/spfileedwbase.ora /home/oracle/spfileedwbase.ora.bak
    Copy the content of spfileedwbase.ora.bak into initedwbase.ora, leaving out any garbled characters.
    Using this initedwbase.ora, start one of the RAC instances to the mount phase:
      SQL> startup mount pfile=/home/oracle/initedwbase.ora
    Ensure one of the database instances is mounted before attempting to recreate the spfile:
      SQL> select INSTANCE_NAME,HOST_NAME,STATUS from v$instance;
      INSTANCE_NAME HOST_NAME  STATUS
      ------------- ---------  ------
      edwbase1      dm01db01   MOUNTED
    Create the new spfile:
      SQL> create spfile='+DATA_DM01/EDWBASE/spfileedwbase.ora' from pfile='/home/oracle/initedwbase.ora';
    ASMCMD will show that a new spfile has been created, as the alias spfilerac2.ora now points to a new spfile under the PARAMETERFILE directory in ASM:
      ASMCMD> pwd
      +DATA_DM01/EDWBASE
      ASMCMD> ls -l
      Type Redund Striped Time Sys Name
                               Y   CONTROLFILE/
                               Y   DATAFILE/
                               Y   ONLINELOG/
                               Y   PARAMETERFILE/
                               Y   TEMPFILE/
                               N   spfilerac2.ora => +DATA_DM01/EDWBASE/PARAMETERFILE/spfile.356.800013581
    Shut down the instance and restart the database using srvctl, so it comes up on the newly created spfile:
      SQL> shutdown immediate
      ORA-01109: database not open
      Database dismounted.
      ORACLE instance shut down.
      SQL> exit
      [oracle@dm01db01 ~]$ srvctl start database -d edwbase
      [oracle@dm01db01 ~]$ srvctl status database -d edwbase
      Instance edwbase1 is running on node dm01db01
      Instance edwbase2 is running on node dm01db02
    ASMCMD will now show a number of spfiles in the PARAMETERFILE directory for this database. The spfile containing the parameter preventing startups should be removed from ASM. In this case the file spfile.355.800017117 can be removed because spfile.356.800013581 is the current spfile:
      ASMCMD> pwd
      +DATA_DM01/EDWBASE
      ASMCMD> cd PARAMETERFILE
      ASMCMD> ls -l
      Type          Redund Striped Time            Sys Name
      PARAMETERFILE UNPROT COARSE  FEB 19 08:00:00 Y   spfile.355.800017117
      PARAMETERFILE UNPROT COARSE  FEB 19 08:00:00 Y   spfile.356.800013581
      ASMCMD> rm spfile.355.800017117
      ASMCMD> ls
      spfile.356.800013581
    Reference: Recreating the Spfile for RAC Instances Where the Spfile is Stored in ASM [ID 554120.1]

    Read the article

  • Silverlight Cream for May 22, 2010 -- #867

    - by Dave Campbell
    In this Issue: Michael Washington, Xianzhong Zhu, Jim Lynn, Laurent Bugnion, and Kyle McClellan. A ton of Shoutouts this time: Cigdem Patlak (CrocusGirl) is interviewed about Silverlight 4 on Channel 9: Silverlight discussion with Cigdem Patlak Timmy Kokke has material up from a presentation he did, and check out the SilverAmp project he's got going: Code & Slides – SDE – What’s new in Silverlight 4 Graham Odds at ScottLogic has an interesting post up: Contextual cues in user interface design Einar Ingebrigtsen is discussing Balder licensing and is asking for input: Balder - Licensing SilverLaw has updated two of his stylings at the Expression Gallery to Silverlight 4: ChildWindow and Accordion Styling Silverlight 4 Keep this page bookmarked -- it's the only page you'll need for Silverlight and Expression links.. well, that and my blog :) .. from Adam Kinney: Silverlight and Expression Blend Jeremy Boyd and John-Daniel Trask have some sweet-looking controls in their new release: Introducing Silverlight Elements 1.1 Matthias Shapiro entered the Design for America competition with his Recovery Review: A Silverlight Sunlight Foundation Visualization Project be sure to check out his blog post about it -- there's a link at the bottom. Koen Zwikstra announed a new release: Document Toolkit 2 Beta 1 available ... built for SL4 and lots of features -- check out the blog post. From SilverlightCream.com: Simple Example To Secure WCF Data Service OData Methods Michael Washington has a follow-on tutorial up on WCF Data Security with OData -- essentially this is the 'securing the data' part ... the Silverlight part was in the previous post... all code is available. Developing Freecell Game Using Silverlight 3 Part 1 Xianzhong Zhu has the first of a two-part tutorial up on building Freecell in Silverlight 3 ... yeah... SL3 -- oh, can you say WP7?? :) Silverlight Top Tip: Startup page for Navigation Apps Jim Lynn has detailed how to go straight to a specific page you're working on in a complex Silverlight app say for debug purposes rather than page/page/page ... I was just thinking yesterday about putting a shortcut on my taskbar for something similar in .NET :) Handling DataGrid.SelectedItems in an MVVM-friendly manner Laurent Bugnion responded with code to a question about getting a DataGrid's SelectedItems into the ViewModel in MVVMLight. Demo code available too. RIA Services and Windows Live ID Kyle McClellan has a post up discussing using LiveID and RIA Services and Silverlight. Lots of external links sprinkled around. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • MacBook Pro 13" Retina (10,2): Keyboard and Touchpad don't work correctly

    - by Dirk
    I've been working with Ubuntu for about 5 years and have installed it on several laptops. Now I'm stuck trying to install Ubuntu 12.04.1 on a brand new MacBook Pro 13" Retina (10,2). I can successfully start Ubuntu from a USB stick, the Ubuntu desktop is visible, and a mouse cursor is visible, but there is no response to keyboard or touchpad input, so I cannot really install Ubuntu on the MacBook. The details of my approach:
    - Prepare an empty USB stick.
    - Download "ISO 2 USB EFI Booter for Mac" and copy the file bootX64.efi to the USB drive as /efi/boot/bootX64.efi.
    - Download Ubuntu 12.04.1 Desktop for Mac from http://cdimage.ubuntu.com/releases/1-amd64+mac.iso and copy the ISO to the USB drive as /efi/boot/boot.iso.
    - Put the USB stick into the MacBook.
    - Press and hold the "alt" button while switching the MacBook on.
    - Select "EFI Boot" from the boot menu that appears and press the Return / Enter key.
    - Immediately a black terminal screen appears with the headline "Welcome to the Ubuntu ISO << - EFI booter". 30 seconds later the familiar Ubuntu startup graphics screen is showing. A further 20 seconds later Ubuntu has started and the desktop is visible - in wonderfully fine resolution.
    - Now the computer does not respond to any actions on the touchpad or the keyboard.
    Has anyone installed Ubuntu on this MacBook Pro 13" Retina (10,2) successfully? On https://help.ubuntu.com/community/MacBookPro this unit is not listed yet, anyway. Any help would be greatly appreciated! Dirk
    PS: I could now install Ubuntu with an external USB keyboard/mouse set. But now, after showing the GRUB menu, a kernel panic error appears and booting stops :-/ It seems the Ubuntu images don't fit a MacBook Pro Retina 13" (10,2) yet.
    PPS: OK, there are new facts: if I edit the boot options and enter "nomodeset noapic", Ubuntu starts and keyboard and touchpad work! Now I have to enable WiFi...
    PPPS: After installing the Broadcom firmware from the USB live stick as described in other posts, WiFi was enabled. Then I could update Ubuntu normally to 12.10. After this, I no longer have to enter "nomodeset noapic" in the GRUB menu. The last thing now is the touchpad: the driver seems not to be there, and the touchpad only shows up as a mouse. t.b.c.
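    For anyone who hits the same problem and needs those boot options to persist (at least until an updated image makes them unnecessary), they can be added to the GRUB defaults instead of being typed at the boot menu each time. A minimal sketch, assuming the stock Ubuntu locations for the GRUB defaults file and the update command:
      # /etc/default/grub -- append the options that worked above to the default kernel command line
      GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset noapic"

      # then regenerate the GRUB configuration so the change takes effect on the next boot
      sudo update-grub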

    Read the article

  • Silverlight Cream for April 26, 2010 -- #848

    - by Dave Campbell
    In this Issue: Viktor Larsson, Mike Snow(-2-), Jeff Brand, Marlon Grech(-2-, -3-), Jonathan van de Veen, Phil Middlemiss. Shoutout: Justin Angel wants everyone to know he is Joining the Vertigo Team!... congratulations, Justin! From SilverlightCream.com: Learning Silverlight – Advanced Color Animations Viktor Larsson is demonstrating small pieces of Silverlight he's picked upon in the course of his work project. This first one is on ColorAnimations using KeyFrames Silverlight Tip of the Day #4 – Enabling Out of Browser Applications Mike Snow has Tip #4 up and it's all about OOB... from what you have to do to what your user sees, including how to check to see if you're running OOB... source project included. Silverlight Tip of the Day #5 – Debugging Out of Browser Applications Following a fine tradition he started with his first series, Mike Snow is putting out more than one Tip per day :) ... Number 5 is up and is all about debugging OOB apps. Simplifying Page Transitions in Windows Phone 7 Silverlight Applications Jeff Brand has a WP7 post up discussing Page Transitions. He first discusses the most common brute-force method, then moves into the TransitioningContentControl from the Toolkit. An introduction to MEFedMVVM – PART 1 Marlon Grech, Peter O’Hanlon, and Glenn Block worked together to produce an MEF and MVVM library that works for WPF and Silverlight and allows Design-time goodness and a loosely-coupled bridge between the View and ViewModel ... and it's on CodePlex ... they're also looking for comments/additions, so check it out. Leveraging MEFedMVVM ExportViewModel – MEFedMVVM Part 2 In Part 2, Marlon Grech demonstrates using MEFedMVVM and shows off some of the basics such as Importing services, Design-Time data and DataContextAware ViewModels IContextAware services to bridge the gap between the View and the ViewModel – MEFedMVVM Part 3 Marlon Grech's 3rd post about MEFedMVVM is about IContextAwareService -- bridging the gap betwen the View and ViewModel -- a service that knows about it's context. Building a Web Setup that configures your Silverlight application Jonathan van de Veen has a post up at SilverlightShow on using a Web Setup Project to configure your Silverlight when things startup... if you're not familiar with doing this... take note! A Chrome and Glass Theme - Part 4 Phil Middlemiss has part 4 of his great tutorial series up on creating a theme in Expression Blend ... this time tackling the listbox. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • ODI 11g – How to Load Using Partition Exchange

    - by David Allan
    Here we will look at how to load large volumes of data efficiently into the Oracle database using a mixture of CTAS and partition exchange loading. The example we will leverage was posted by Mark Rittman a couple of years back on Interval Partitioning, you can find that posting here. The best thing about ODI is that you can encapsulate all those ‘how to’ blog posts and scripts into templates that can be reused – the templates are of course Knowledge Modules. The interface design to mimic Mark's posting is shown below; The IKM I have constructed performs a simple series of steps to perform a CTAS to create the stage table to use in the exchange, then lock the partition (to ensure it exists, it will be created if it doesn’t) then exchange the partition in the target table. You can find the IKM Oracle PEL.xml file here. The IKM performs the follows steps and is meant to illustrate what can be done; So when you use the IKM in an interface you configure the options for hints (for parallelism levels etc), initial extent size, next extent size and the partition variable;   The KM has an option where the name of the partition can be passed in, so if you know the name of the partition then set the variable to the name, if you have interval partitioning you probably don’t know the name, so you can use the FOR clause. In my example I set the variable to use the date value of the source data FOR (TO_DATE(''01-FEB-2010'',''dd-MON-yyyy'')) Using a variable lets me invoke the scenario many times loading different partitions of the same target table. Below you can see where this is defined within ODI, I had to double single-quote the strings since this is placed inside the execute immediate tasks in the KM; Note also this example interface uses the LKM Oracle to Oracle (datapump), so this illustration uses a lot of the high performing Oracle database capabilities – it uses Data Pump to unload, then a CreateTableAsSelect (CTAS) is executed on the external table based on top of the Data Pump export. This table is then exchanged in the target. The IKM and illustrations above are using ODI 11.1.1.6 which was needed to get around some bugs in earlier releases with how the variable is handled...as far as I remember.
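    To make the steps concrete, the SQL the IKM boils down to looks roughly like the following. This is a sketch only - the target, stage and source table names are placeholders rather than anything the KM generates, and the hints and storage clauses would come from the KM options described above:
      -- 1. CTAS: build the stage table from the source data
      CREATE TABLE stage_sales NOLOGGING PARALLEL
      AS SELECT * FROM src_sales_ext;

      -- 2. Lock the target partition so it exists (interval partitions are created on demand)
      LOCK TABLE sales PARTITION FOR (TO_DATE('01-FEB-2010','dd-MON-yyyy')) IN SHARE MODE;

      -- 3. Swap the stage table in for the partition
      ALTER TABLE sales
        EXCHANGE PARTITION FOR (TO_DATE('01-FEB-2010','dd-MON-yyyy'))
        WITH TABLE stage_sales;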

    Read the article

  • Oracle Open World starts on Sunday, Sept 30

    - by Mike Dietrich
    Oracle Open World 2012 starts on Sunday this week - and we are really looking forward to seeing you in one of our presentations, especially the session "Database Upgrade on Steroids: Real Speed, Real Customers, Real Secrets" on Monday, Oct 1, at 12:15pm in Moscone South 307 (just skip the lunch - the boxed food is not healthy at all):
    Monday, Oct 1, 12:15 PM - 1:15 PM - Moscone South - 307
    Database Upgrade on Steroids: Real Speed, Real Customers, Real Secrets
    Mike Dietrich - Consulting Member Technical Staff, Oracle; Georg Winkens - Technical Manager, Amadeus Data Processing; Carol Tagliaferri - Senior Development Manager, Oracle
    Looking to improve the performance of your database upgrade and learn about other ways to reduce upgrade time? Isn’t everyone? In this session, you will learn directly from Oracle’s Upgrade Development team about what you can do to speed things up. Find out about ways to reduce upgrade downtime such as using a transient logical standby database and/or Oracle GoldenGate, and get other hints and tips. Learn about new features that improve upgrade performance and reduce downtime. Hear Georg Winkens, DB Services technical manager from Amadeus, speak about his upgrade experience, and get real-life performance measurements and advice for a successful upgrade.
    And don't forget: we already start on Sunday, so if you'd like to learn about the SAP database upgrades at Deutsche Messe:
    Sunday, Sep 30, 11:15 AM - 12:00 PM - Moscone West - 2001
    Oracle Database Upgrade to 11g Release 2 with SAP Applications
    Andreas Ellerhoff - DBA, Deutsche Messe AG; Mike Dietrich - Consulting Member Technical Staff, Oracle; Jan Klokkers - Sr. Director SAP Development, Oracle
    Deutsche Messe began to use Oracle6 Database at the end of the 1980s and has been using Oracle Database technology together with SAP applications successfully since 2002. At the end of 2010, it took the first steps of an upgrade to Oracle Database 11g Release 2 (11.2), and since mid-2011, all SAP production systems there have run successfully on Oracle Database 11g. This presentation explains why Deutsche Messe uses Oracle Database together with SAP applications, discusses the many reasons for the upgrade to Release 11g, and focuses on the top operational aspects from a DBA perspective.
    And unfortunately the Hands-On Lab is sold out already... We would like to apologize, but we have absolutely ZERO influence on either the number of runs or the number of available seats.
    Tuesday, Oct 2, 10:15 AM - 12:45 PM - Marriott Marquis - Salon 12/13
    Hands On Lab: Upgrading an Oracle Database Instance, Using Best Practices
    Roy Swonger - Senior Director, Software Development, Oracle; Carol Tagliaferri - Senior Development Manager, Oracle; Mike Dietrich - Consulting Member Technical Staff, Oracle; Cindy Lim - PMTS, Oracle; Carol Palmer - Principal Product Manager, Oracle
    This hands-on lab gives participants the opportunity to work through a database upgrade from an older release of Oracle Database to the very latest Oracle Database release available. Participants will learn how the improved automation of the upgrade process and the generation of fix-up scripts can quickly help fix database issues prior to upgrading. The lab also uses the new parallel upgrade feature to improve performance of the upgrade, resulting in less downtime. Come get inside information about database upgrades from the Database Upgrade development team.
    See you soon

    Read the article

  • Speed Up the Help Dialog in Windows and Office

    - by Matthew Guay
    When you click help, you don’t want to wait for your computer to bring it to you.  Here’s how you can speed up the help dialog in Windows and Office. If you have a slow internet connection, chances are you’ve been frustrated by the Help dialog in Windows and Office trying to download fresh content every time you open them. This can be great if the updated help files contain better content, but sometimes you just want to find what you were looking for without waiting.  Here’s how you can turn off the automatic online help. Use Local Help in Windows Windows 7 and Vista’s help dialog usually tries to load the latest content from the net, but this can take a long time on slow connections. If you’re seeing the above screen a lot, you may want to switch to offline help.  Click the “Online Help” button at the bottom, and select “Get offline Help”. Now your computer will just load the pre-installed help files.  And don’t worry; if there’s a major update to your help files, Windows will download and install it through Windows Update.   Stupid Geek Tip: An easy way to open Windows Help is to click on your desktop or Start Menu and press F1 on your keyboard. Use Local Help in Office This same trick works in Office 2007 and 2010.  We’ve actually had more problems with Office’s help being tardy. Solve this the same way as with Windows help.  Click on the “Connected to Office.com” or “Connected to Office Online” button, depending on your version of Office, and select “Show content only from this computer”. This will automatically change the settings for Help in all of your Office applications. While this may not be a major trick, it can be helpful especially if you have a slow internet connection and want to get things done quickly.

    Read the article

  • Summit Old, Summit New, Summit Borrowed...

    - by Rob Farley
    PASS Summit is coming up, and I thought I’d post a few things. Summit Old... At the PASS Summit, you will get the chance to hear presentations by the SQL Server establishment. Just about every big name in the SQL Server world is a regular at the PASS Summit, so you will get to hear and meet people like Kalen Delaney (@sqlqueen) (who just recently got awarded MVP status for the 20th year running), and from all around the world such as the UK’s Chris Webb (@technitrain) or Pinal Dave (@pinaldave) from India. Almost all the household names in SQL Server will be there, including a large contingent from Microsoft. The PASS Summit is by far the best place to meet the legends of SQL Server. And they’re not all old. Some are, but most of them are younger than you might think. ...Summit New... The hottest topics are often about the newest technologies (such as SQL Server 2012). But you will almost certainly learn new stuff about older versions too. But that’s not what I wanted to pick on for this point. There are many new speakers at every PASS Summit, and content that has not been covered in other places. This year, for example, LobsterPot’s Roger Noble (@roger_noble) is giving a presentation for the first time. He’s a regular around the Australian circuit, but this is his first time presenting to a US audience. New Zealand’s Paul White (@sql_kiwi) is attending his first PASS Summit, and will be giving over four hours of incredibly deep stuff that has never been presented anywhere in the US before (I can’t say the world, because he did present similar material in Adelaide earlier in the year). ...Summit Borrowed... No, I’m not talking about plagiarism – the talks you’ll hear are all their own work. But you will get a lot of stuff you’ll be able to take back and apply at work. The PASS Summit sessions are not full of sales-pitches, telling you about how great things could be if only you’d buy some third-party vendor product. It’s simply not that kind of conference, and PASS doesn’t allow that kind of talk to take place. Instead, you’ll be taught techniques, and be able to download scripts and slides to let you perform that magic back at work when you get home. You will definitely find plenty of ideas to borrow at the PASS Summit. ...Summit Blue Yeah – and there’s karaoke. Blue - Jason - SQL Karaoke - YouTube

    Read the article

  • Best development architecture for a small team of programmers (WAMP Stack)

    - by Tio
    Hi all.. I'm in the first month of work at a new company, and after I met the two programmers and asked how things are organized in terms of projects inside the company, they simply shrugged their shoulders and said that nothing is organized. I think my jaw hit the ground right then. (I know some of you think I should quit, but I'm in a privileged position: I'm the most experienced there, so there's room for me to grow inside the company, and I'm taking the high road.) So I talked to the IT guy and one of the programmers, and maybe this week I'm going to get a server all to myself to start organizing things.
    I've used various architectures in my previous work experiences. In one I was developing on a server on the network (no source control, of course); another experience I had was developing on my local computer, with no server on the network, just source control. And at home I have a mix of the two: everything I code is on a server on the network, I have those folders under source control, and I also have a no-ip account configured on that server so I can access it from anywhere and can show the clients anything.
    For me, I think this last solution (the one I have at home) is the best:
    - Network server with WAMP stack.
    - The server has a public IP so we can access it by domain name, and we use subdomains for each project.
    - Everybody works directly on the network server.
    I think the problem arises when two or more people want to work on the same project; in that case the only way to do it is by using source control and local repositories. This is great, but I think it makes development a lot more complicated. In the example I gave, to make a change to the code I would simply need to open the file in my favorite editor, make the change, alter the database, check the changes into source control, and presto, all done. Using local repositories, I would have to get the latest version, run the scripts on the local database to update it, alter the file, alter the database, check in the changes to the network server, update the database on the network server, see if everything is running well on the network server, and presto, all done. To me this seems overcomplicated for a change to a simple PHP page. I could share the database between local development and the network server; that sure would help.
    Maybe the best way to do this is just simply:
    - Network server with WAMP stack (a test server, so to speak), publicly accessible through the web.
    - LAMP stack on every developer computer (minus the database).
    - We develop locally, test, then check the changes in to the test server, and presto.
    What do you think? Maybe I should start doing this at home too.. Thanks and best regards...
    Edit: I'm sorry, I made a mistake and switched WAMP with LAMP, sorry about that..

    Read the article

  • DHCP server with multiple interfaces on Ubuntu destroys default gateway

    - by Henrik Alstad
    I use Ubuntu, and I have many interfaces. eth0, which is my internet connection, and it gets its info from a DHCP-server totally outisde of my control. I then have eth1,eth2,eth3 and eth4 which I have created a DHCP-server for.(ISC DHCP-Server) It seems to work, and I even get an IP-address from the foreign DHCP-server on the internet facing interface. However, for some reason it seems my gateway for eth0 became screwed after I installed my local DHCP-server for eth1-eth4. (I think so because I got an IP for eth0, and I can ping other stuff on the local network, but I cannot get access to the internet). My eth0-specific info in /etc/network/interfaces: auto lo iface lo inet loopback auto eth0 iface eth0 inet dhcp auto eth1 iface eth1 inet static address 10.0.1.1 netmask 255.255.255.0 network 10.0.1.0 broadcast 10.0.1.255 gateway 10.0.1.1 mtu 8192 auto eth2 iface eth2 inet static address 10.0.2.1 netmask 255.255.255.0 network 10.0.2.0 broadcast 10.0.2.255 gateway 10.0.2.1 mtu 8192 My /etc/default/isc-dhcp-server: INTERFACES="eth1 eth2 eth3 eth4" So why does my local DHCP-server fuck up the gateway for eth0, when I tell it not to listen to eth0? Anyone see the problem or what I can do to fix it? The problem seems indeed to be the gateways. "netstat -nr" gives: 0.0.0.0 --- 10.X.X.X ---- 0.0.0.0 --- UG 0 0 0 eth3 It should have been 0.0.0.0 129.2XX.X.X 0.0.0.0 UG 0 0 0 eth0 So for some reason, my local DHCP-server overrides the gateway I get from the network DHCP. Edit: dhcp.conf looks like this(I included info only for eth1 subnet): ddns-update-style none; not authoritative; subnet 10.0.1.0 netmask 255.255.255.0 { interface eth1; option domain-name "example.org"; option domain-name-servers ns1.example.org, ns2.example.org; default-lease-time 600; max-lease-time 7200; range 10.0.1.10 10.0.1.100; host camera1_1 { hardware ethernet 00:30:53:11:24:6E; fixed-address 10.0.1.10; } host camera2_1 { hardware ethernet 00:30:53:10:16:70; fixed-address 10.0.1.11; } } Also, it seems that the gateway is correctly set if I run "/etc/init.d/networking restart" in a terminal, but that's not helpful for me, I need the correct gateway to be set during startup, and i'd rather find the source of the problem
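    One thing worth checking, given those symptoms: the gateway lines in the static stanzas. A gateway entry on eth1-eth4 competes with (and here appears to override) the default route that DHCP sets up on eth0, which matches the netstat output showing the default route on eth3. A sketch of a cleaned-up stanza, using the same addresses as in the question - the local camera subnets don't need a default gateway at all:
      auto eth1
      iface eth1 inet static
          address 10.0.1.1
          netmask 255.255.255.0
          network 10.0.1.0
          broadcast 10.0.1.255
          mtu 8192
          # no "gateway" line here -- only eth0 (DHCP) should supply the default route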

    Read the article

  • Sony VAIO with Insyde H2O EFI bios will not boot into GRUB EFI

    - by Rohan Dhruva
    I bought a new Sony Vaio S series laptop. It uses Insyde H2O BIOS EFI, and trying to install Linux on it is driving me crazy. root@kubuntu:~# parted /dev/sda print Model: ATA Hitachi HTS72756 (scsi) Disk /dev/sda: 640GB Sector size (logical/physical): 512B/4096B Partition Table: gpt Number Start End Size File system Name Flags 1 1049kB 274MB 273MB fat32 EFI system partition hidden 2 274MB 20.8GB 20.6GB ntfs Basic data partition hidden, diag 3 20.8GB 21.1GB 273MB fat32 EFI system partition boot 4 21.1GB 21.3GB 134MB Microsoft reserved partition msftres 5 21.3GB 342GB 320GB ntfs Basic data partition 6 342GB 358GB 16.1GB ext4 Basic data partition 7 358GB 374GB 16.1GB ntfs Basic data partition 8 374GB 640GB 266GB ntfs Basic data partition What is surprising is that there are 2 EFI system partitions on the disk. The sda2 partition is a 20gb recovery partition which loads windows with a basic recovery interface. This is accessible by pressing the "ASSIST" button as opposed to the normal power button. I presume that the sda1 EFI System Partition (ESP) loads into this recovery. The sda3 ESP has more fleshed out entries for Microsoft Windows, which actually goes into Windows 7 (as confirmed by bcdedit.exe on Windows). Ubuntu is installed on sda6, and while installation I chose sda3 as my boot partition. The installer correctly created a sda3/EFI/ubuntu/grubx64.efi application. The real problem: for the life of me, I can't set it to be the default! I tried creating a sda3/startup.nsh which called grubx64.efi, but it didn't help -- on rebooting, the system still boots into windows. I tried using efibootmgr, and that shows as it it worked: root@kubuntu:~# efibootmgr BootCurrent: 0000 BootOrder: 0000,0001 Boot0000* EFI USB Device Boot0001* Windows Boot Manager root@kubuntu:~# efibootmgr --create --gpt --disk /dev/sda --part 3 --write-signature --label "GRUB2" --loader "\\EFI\\ubuntu\\grubx64.efi" BootCurrent: 0000 BootOrder: 0002,0000,0001 Boot0000* EFI USB Device Boot0001* Windows Boot Manager Boot0002* GRUB2 root@kubuntu:~# efibootmgr BootCurrent: 0000 BootOrder: 0002,0000,0001 Boot0000* EFI USB Device Boot0001* Windows Boot Manager Boot0002* GRUB2 However, on rebooting, as you guessed, the machine rebooted directly back into Windows. The only things I can think of are: The sda1 partition is somehow being used Overwrite /EFI/Boot/bootx64.efi and /EFI/Microsoft/Boot/bootmgfw.efi with grubx64.efi [but this seems really radical]. Can anyone please help me out? Thanks -- any help is greatly appreciated, as this issue is driving me crazy!
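    If it comes to the second option, the less radical half is to populate only the removable-media fallback path and leave the Microsoft loader alone. A sketch, assuming the ESP in question is mounted at /boot/efi and grubx64.efi is where the installer put it:
      # back up whatever currently sits at the fallback path, then drop GRUB in
      sudo cp /boot/efi/EFI/Boot/bootx64.efi /boot/efi/EFI/Boot/bootx64.efi.bak
      sudo cp /boot/efi/EFI/ubuntu/grubx64.efi /boot/efi/EFI/Boot/bootx64.efi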

    Read the article

  • Cross-platform independent development

    - by Joe Wreschnig
    Some years ago, if you wrote in C and some subset of C++ and used a sufficient number of platform abstractions (via SDL or whatever), you could run on every platform an indie could get on - Linux, Windows, Mac OS of various versions, obscure stuff like BeOS, and the open consoles like the GP2X and post-death Dreamcast. If you got a contract for a closed platform at some point, you could port your game to that platform with "minimal" code changes as well. Today, indie developers must use XNA to get on the Xbox 360 (and upcoming Windows phone); must not use XNA to work anywhere else but Windows; until recently had to use Java on Android; Flash doesn't run on phones, HTML5 doesn't work on IE. Unlike e.g. DirectX vs. OpenGL or Windows vs. Unix, these are changes to the core language you write your code in and can't be papered over without, basically, writing a compiler. You can move some game logic into scripts and include an interpreter - except when you can't, because the iPhone SDK doesn't allow it, and performance suffers because no one allows JIT. So what can you do if you want a really cross-platform portable game, or even just a significant body of engine and logic code? Is this not a problem because the platforms have fundamentally diverged - it's just plain not worthwhile to try to target both an iPhone and the Xbox 360 with any shared code because such a game would be bad? (I find this very unlikely. I can easily see wanting to share a game between a Windows Mobile phone and an Android, or an Xbox 360 and an iPad.) Are interfaces so high-level now that porting time is negligible? (I might believe this for business applications, but not for games with strict performance requirements.) Is this going to become more pronounced in the future? Is the split going to be, somewhat scarily, still down vendor lines? Will we all rely on high-level middleware like Flash or Unity to get anything cross-platform done? tl;dr - Is porting a problem, is it going to be a bigger problem in the future, and if so how do we solve it?

    Read the article

  • Dynamically loading Assemblies to reduce Runtime Dependencies

    - by Rick Strahl
    I've been working on a request to the West Wind Application Configuration library to add JSON support. The config library is a very easy-to-use, code-first approach to configuration: you create a class that holds the configuration data and inherits from a base configuration class, and then assign a persistence provider at runtime that determines where and how the configuration data is stored. Currently the library supports .NET Configuration stores (web.config/app.config), XML files, SQL records and string storage.
    About once a week somebody asks me about JSON support, and I've deflected this question for the longest time because frankly I think that JSON as a configuration store doesn't really buy a heck of a lot over XML. Both formats require the user to perform some fixup of the plain configuration data - in XML into XML tags, with JSON using JSON delimiters for properties and property formatting rules. Sure, JSON is a little less verbose and maybe a little easier to read if you have hierarchical data, but overall the differences are pretty minor in my opinion. And yet - the requests keep rolling in.
    Hard Link Issues in a Component Library
    Another reason I've been hesitant is that I really didn't want to pull a dependency on an external JSON library - in this case JSON.NET - into the core library. If you're not using JSON.NET elsewhere, I don't want a user to have to take a hard dependency on JSON.NET unless they want to use the JSON feature. JSON.NET is also sensitive to versions and doesn't play nice with multiple versions when hard linked. For example, when you have a reference to V4.4 in your project but the host application has a reference to version 4.5, you can run into assembly load problems. NuGet's Update-Package can solve some of this *if* you can recompile, but that's not ideal for a component that's supposed to be just plug and play. This is no criticism of JSON.NET - this really applies to any dependency that might change. So hard linking the DLL can be problematic for a number of reasons, but the primary reason is to not force loading of JSON.NET unless you actually need it when you use the JSON configuration features of the library.
    Enter Dynamic Loading
    So rather than adding an assembly reference to the project, I decided that it would be better to dynamically load the DLL at runtime and then use dynamic typing to access the various classes. This allows me to run without a hard assembly reference and allows more flexibility with version number differences now and in the future. But there are also a couple of downsides:
    - No assembly reference means only dynamic access - no compiler type checking or Intellisense
    - The host application must have a reference to JSON.NET or else you get runtime errors
    The former is minor, but the latter can be problematic. Runtime errors are always painful, but in this case I'm willing to live with it. If you want to use JSON configuration settings, JSON.NET needs to be loaded in the project. If this is a Web project, it'll likely be there already.
    So there are a few things that are needed to make this work:
    - Dynamically create an instance and optionally attempt to load the assembly (if not loaded)
    - Load types into dynamic variables
    - Use Reflection for a few tasks like statics/enums
    The dynamic keyword in C# makes the formerly most difficult Reflection part - method calls and property assignments - fairly painless. But as cool as dynamic is, it doesn't handle all aspects of Reflection.
Specifically it doesn't deal with object activation, truly dynamic (string based) member activation or accessing of non instance members, so there's still a little bit of work left to do with Reflection.Dynamic Object InstantiationThe first step in getting the process rolling is to instantiate the type you need to work with. This might be a two step process - loading the instance from a string value, since we don't have a hard type reference and potentially having to load the assembly. Although the host project might have a reference to JSON.NET, that instance might have not been loaded yet since it hasn't been accessed yet. In ASP.NET this won't be a problem, since ASP.NET preloads all referenced assemblies on AppDomain startup, but in other executable project, assemblies are just in time loaded only when they are accessed.Instantiating a type is a two step process: Finding the type reference and then activating it. Here's the generic code out of my ReflectionUtils library I use for this:/// <summary> /// Creates an instance of a type based on a string. Assumes that the type's /// </summary> /// <param name="typeName">Common name of the type</param> /// <param name="args">Any constructor parameters</param> /// <returns></returns> public static object CreateInstanceFromString(string typeName, params object[] args) { object instance = null; Type type = null; try { type = GetTypeFromName(typeName); if (type == null) return null; instance = Activator.CreateInstance(type, args); } catch { return null; } return instance; } /// <summary> /// Helper routine that looks up a type name and tries to retrieve the /// full type reference in the actively executing assemblies. /// </summary> /// <param name="typeName"></param> /// <returns></returns> public static Type GetTypeFromName(string typeName) { Type type = null; // Let default name binding find it type = Type.GetType(typeName, false); if (type != null) return type; // look through assembly list var assemblies = AppDomain.CurrentDomain.GetAssemblies(); // try to find manually foreach (Assembly asm in assemblies) { type = asm.GetType(typeName, false); if (type != null) break; } return type; } To use this for loading JSON.NET I have a small factory function that instantiates JSON.NET and sets a bunch of configuration settings on the generated object. The startup code also looks for failure and tries loading up the assembly when it fails since that's the main reason the load would fail. Finally it also caches the loaded instance for reuse (according to James the JSON.NET instance is thread safe and quite a bit faster when cached). 
Here's what the factory function looks like in JsonSerializationUtils:/// <summary> /// Dynamically creates an instance of JSON.NET /// </summary> /// <param name="throwExceptions">If true throws exceptions otherwise returns null</param> /// <returns>Dynamic JsonSerializer instance</returns> public static dynamic CreateJsonNet(bool throwExceptions = true) { if (JsonNet != null) return JsonNet; lock (SyncLock) { if (JsonNet != null) return JsonNet; // Try to create instance dynamic json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer"); if (json == null) { try { var ass = AppDomain.CurrentDomain.Load("Newtonsoft.Json"); json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer"); } catch (Exception ex) { if (throwExceptions) throw; return null; } } if (json == null) return null; json.ReferenceLoopHandling = (dynamic) ReflectionUtils.GetStaticProperty("Newtonsoft.Json.ReferenceLoopHandling", "Ignore"); // Enums as strings in JSON dynamic enumConverter = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.Converters.StringEnumConverter"); json.Converters.Add(enumConverter); JsonNet = json; } return JsonNet; }This code's purpose is to return a fully configured JsonSerializer instance. As you can see the code tries to create an instance and when it fails tries to load the assembly, and then re-tries loading.Once the instance is loaded some configuration occurs on it. Specifically I set the ReferenceLoopHandling option to not blow up immediately when circular references are encountered. There are a host of other small config setting that might be useful to set, but the default seem to be good enough in recent versions. Note that I'm setting ReferenceLoopHandling which requires an Enum value to be set. There's no real easy way (short of using the cardinal numeric value) to set a property or pass parameters from static values or enums. This means I still need to use Reflection to make this work. I'm using the same ReflectionUtils class I previously used to handle this for me. The function looks up the type and then uses Type.InvokeMember() to read the static property.Another feature I need is have Enum values serialized as strings rather than numeric values which is the default. To do this I can use the StringEnumConverter to convert enums to strings by adding it to the Converters collection.As you can see there's still a bit of Reflection to be done even in C# 4+ with dynamic, but with a few helpers this process is relatively painless.Doing the actual JSON ConversionFinally I need to actually do my JSON conversions. For the Utility class I need serialization that works for both strings and files so I created four methods that handle these tasks two each for serialization and deserialization for string and file.Here's what the File Serialization looks like:/// <summary> /// Serializes an object instance to a JSON file. 
/// </summary> /// <param name="value">the value to serialize</param> /// <param name="fileName">Full path to the file to write out with JSON.</param> /// <param name="throwExceptions">Determines whether exceptions are thrown or false is returned</param> /// <param name="formatJsonOutput">if true pretty-formats the JSON with line breaks</param> /// <returns>true or false</returns> public static bool SerializeToFile(object value, string fileName, bool throwExceptions = false, bool formatJsonOutput = false) { dynamic writer = null; FileStream fs = null; try { Type type = value.GetType(); var json = CreateJsonNet(throwExceptions); if (json == null) return false; fs = new FileStream(fileName, FileMode.Create); var sw = new StreamWriter(fs, Encoding.UTF8); writer = Activator.CreateInstance(JsonTextWriterType, sw); if (formatJsonOutput) writer.Formatting = (dynamic)Enum.Parse(FormattingType, "Indented"); writer.QuoteChar = '"'; json.Serialize(writer, value); } catch (Exception ex) { Debug.WriteLine("JsonSerializer Serialize error: " + ex.Message); if (throwExceptions) throw; return false; } finally { if (writer != null) writer.Close(); if (fs != null) fs.Close(); } return true; }You can see more of the dynamic invocation in this code. First I grab the dynamic JsonSerializer instance using the CreateJsonNet() method shown earlier which returns a dynamic. I then create a JsonTextWriter and configure a couple of enum settings on it, and then call Serialize() on the serializer instance with the JsonTextWriter that writes the output to disk. Although this code is dynamic it's still fairly short and readable.For full circle operation here's the DeserializeFromFile() version:/// <summary> /// Deserializes an object from file and returns a reference. /// </summary> /// <param name="fileName">name of the file to serialize to</param> /// <param name="objectType">The Type of the object. Use typeof(yourobject class)</param> /// <param name="binarySerialization">determines whether we use Xml or Binary serialization</param> /// <param name="throwExceptions">determines whether failure will throw rather than return null on failure</param> /// <returns>Instance of the deserialized object or null. Must be cast to your object type</returns> public static object DeserializeFromFile(string fileName, Type objectType, bool throwExceptions = false) { dynamic json = CreateJsonNet(throwExceptions); if (json == null) return null; object result = null; dynamic reader = null; FileStream fs = null; try { fs = new FileStream(fileName, FileMode.Open, FileAccess.Read); var sr = new StreamReader(fs, Encoding.UTF8); reader = Activator.CreateInstance(JsonTextReaderType, sr); result = json.Deserialize(reader, objectType); reader.Close(); } catch (Exception ex) { Debug.WriteLine("JsonNetSerialization Deserialization Error: " + ex.Message); if (throwExceptions) throw; return null; } finally { if (reader != null) reader.Close(); if (fs != null) fs.Close(); } return result; }This code is a little more compact since there are no prettifying options to set. 
Here JsonTextReader is created dynamically and it receives the output from the Deserialize() operation on the serializer.You can take a look at the full JsonSerializationUtils.cs file on GitHub to see the rest of the operations, but the string operations are very similar - the code is fairly repetitive.These generic serialization utilities isolate the dynamic serialization logic that has to deal with the dynamic nature of JSON.NET, and any code that uses these functions is none the wiser that JSON.NET is dynamically loaded.Using the JsonSerializationUtils WrapperThe final consumer of the SerializationUtils wrapper is an actual ConfigurationProvider, that is responsible for handling reading and writing JSON values to and from files. The provider is simple a small wrapper around the SerializationUtils component and there's very little code to make this work now:The whole provider looks like this:/// <summary> /// Reads and Writes configuration settings in .NET config files and /// sections. Allows reading and writing to default or external files /// and specification of the configuration section that settings are /// applied to. /// </summary> public class JsonFileConfigurationProvider<TAppConfiguration> : ConfigurationProviderBase<TAppConfiguration> where TAppConfiguration: AppConfiguration, new() { /// <summary> /// Optional - the Configuration file where configuration settings are /// stored in. If not specified uses the default Configuration Manager /// and its default store. /// </summary> public string JsonConfigurationFile { get { return _JsonConfigurationFile; } set { _JsonConfigurationFile = value; } } private string _JsonConfigurationFile = string.Empty; public override bool Read(AppConfiguration config) { var newConfig = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfiguration)) as TAppConfiguration; if (newConfig == null) { if(Write(config)) return true; return false; } DecryptFields(newConfig); DataUtils.CopyObjectData(newConfig, config, "Provider,ErrorMessage"); return true; } /// <summary> /// Return /// </summary> /// <typeparam name="TAppConfig"></typeparam> /// <returns></returns> public override TAppConfig Read<TAppConfig>() { var result = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfig)) as TAppConfig; if (result != null) DecryptFields(result); return result; } /// <summary> /// Write configuration to XmlConfigurationFile location /// </summary> /// <param name="config"></param> /// <returns></returns> public override bool Write(AppConfiguration config) { EncryptFields(config); bool result = JsonSerializationUtils.SerializeToFile(config, JsonConfigurationFile,false,true); // Have to decrypt again to make sure the properties are readable afterwards DecryptFields(config); return result; } }This incidentally demonstrates how easy it is to create a new provider for the West Wind Application Configuration component. Simply implementing 3 methods will do in most cases.Note this code doesn't have any dynamic dependencies - all that's abstracted away in the JsonSerializationUtils(). From here on, serializing JSON is just a matter of calling the static methods on the SerializationUtils class.Already, there are several other places in some other tools where I use JSON serialization this is coming in very handy. With a couple of lines of code I was able to add JSON.NET support to an older AJAX library that I use replacing quite a bit of code that was previously in use. 
And for any other manual JSON operations (in a couple of apps I use JSON serialization for 'blob'-like document storage) this is also going to be handy.

Performance?

Some of you might be thinking that using dynamic and Reflection can't be good for performance. And you'd be right… In some informal testing it looks like the native code is nearly twice as fast as the dynamic code, with most of the slowness attributable to type lookups. To test, I created a native class that uses an actual reference to JSON.NET, and performance was consistently around 85-90% faster with the referenced code. That said, I serialized 10,000 objects in 80ms vs. 45ms, so this is hardly slouchy. For the configuration component speed is not that important because both read and write operations typically happen once on first access and then only every once in a while. But for other operations - say a serializer trying to handle AJAX requests on a Web Server - one would be well served to create a hard dependency.

Dynamic Loading - Worth it?

On occasion dynamic loading makes sense, but there's a price to be paid in added code complexity and a performance hit. For some operations that are not pivotal to a component or application and are only used under certain circumstances, dynamic loading can be beneficial to avoid having to ship extra files and weighing down distributions. These days, when you create new projects in Visual Studio with 30 assemblies before you even add your own code, trying to keep file counts under control seems a good idea. It's not the kind of thing you do on a regular basis, but when needed it can be a useful tool. Hopefully some of you find this information useful…

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in .NET  C#
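
As a quick illustration of how the finished wrapper is consumed, here is a minimal usage sketch based on the SerializeToFile() and DeserializeFromFile() signatures shown above - the AppSettings class and the file path are made up for this example:

    // Hypothetical settings POCO used only for this example
    public class AppSettings
    {
        public string ApplicationName { get; set; }
        public int MaxRetries { get; set; }
    }

    // Write pretty-printed JSON to disk (throwExceptions: false, formatJsonOutput: true)
    var settings = new AppSettings { ApplicationName = "MyApp", MaxRetries = 3 };
    bool ok = JsonSerializationUtils.SerializeToFile(settings, @"C:\temp\settings.json", false, true);

    // Read it back - the result comes back as object and has to be cast to the target type
    var loaded = JsonSerializationUtils.DeserializeFromFile(@"C:\temp\settings.json",
                     typeof(AppSettings)) as AppSettings;

The calling code never references JSON.NET directly, which is the whole point of routing everything through the wrapper.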

    Read the article

  • Is inline SQL still classed as bad practice now that we have Micro ORMs?

    - by Grofit
    This is a bit of an open-ended question, but I wanted some opinions. I grew up in a world where inline SQL scripts were the norm; then we were all made very aware of SQL injection issues, and how fragile the SQL was when doing string manipulation all over the place. Then came the dawn of the ORM, where you described the query to the ORM and let it generate its own SQL, which in a lot of cases was not optimal but was safe and easy. Another good thing about ORMs or database abstraction layers was that the SQL was generated with its database engine in mind, so I could use Hibernate/NHibernate with MSSQL or MySQL and my code never changed - it was just a configuration detail.

    Now fast forward to the current day, where Micro ORMs seem to be winning over more developers, and I was wondering why we have seemingly taken a U-turn on the whole inline SQL subject. I must admit I do like the idea of no ORM config files and being able to write my query in a more optimal manner, but it feels like I am opening myself back up to the old vulnerabilities such as SQL injection, and I am also tying myself to one database engine, so if I want my software to support multiple database engines I would need to do some more string hackery, which seems to make code unreadable and more fragile. (Just before someone mentions it, I know you can use parameter-based arguments with most Micro ORMs, which offers protection in most cases from SQL injection.)

    So what are people's opinions on this sort of thing? I am using Dapper as my Micro ORM in this instance and NHibernate as my regular ORM in this scenario, however most in each field are quite similar. What I term inline SQL is SQL strings within source code. There used to be design debates over SQL strings in source code detracting from the fundamental intent of the logic, which is why statically typed LINQ-style queries became so popular - it's still just one language - but with, say, C# and SQL on one page you have two languages intermingled in your raw source code.

    Just to clarify, SQL injection is just one of the known issues with using SQL strings; I already mentioned you can stop it from happening with parameter-based queries. However, I highlight other issues with having SQL queries ingrained in your source code, such as the lack of DB vendor abstraction, as well as losing any level of compile-time error capturing on string-based queries - issues which we managed to sidestep with the dawn of ORMs and their higher-level querying functionality, such as HQL or LINQ etc. (not all of the issues, but most of them). So I am less focused on the individual highlighted issues and more on the bigger picture: is it now becoming more acceptable to have SQL strings directly in your source code again, as most Micro ORMs use this mechanism? Here is a similar question which has a few different viewpoints, although it is more about inline SQL without the Micro ORM context: http://stackoverflow.com/questions/5303746/is-inline-sql-hard-coding
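
    For reference, the parameterized style the question alludes to looks roughly like this with Dapper - a sketch only, with the connection string, the Products table and the Product class assumed for illustration:

        using System.Collections.Generic;
        using System.Data.SqlClient;
        using Dapper;

        // Hypothetical POCO matching a Products table
        public class Product
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public static class ProductQueries
        {
            public static IEnumerable<Product> FindByName(string connectionString, string namePattern)
            {
                using (var connection = new SqlConnection(connectionString))
                {
                    // The SQL string still lives in the source code, but the value travels
                    // as a parameter, so it is never concatenated into the statement text.
                    return connection.Query<Product>(
                        "SELECT Id, Name FROM Products WHERE Name LIKE @Name",
                        new { Name = namePattern });
                }
            }
        }

    This addresses the injection concern, though not the vendor-abstraction or compile-time checking points raised above.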

    Read the article

  • SQL SERVER – Iridium I/O – SQL Server Deduplication that Shrinks Databases and Improves Performance

    - by Pinal Dave
    Database performance is a common problem for SQL Server DBAs.  It seems like we spend more time on performance than just about anything else.  In many cases, we use scripts or tools that point out performance bottlenecks but we don’t have any way to fix them.  For example, what do you do when you need to speed up a query that is already tuned as well as possible?  Or what do you do when you aren’t allowed to make changes for a database supporting a purchased application? Iridium I/O for SQL Server was originally built at Confio Software (makers of Ignite) because DBAs kept asking for a way to actually fix performance instead of just pointing out performance problems. The technology is certified by Microsoft and was so promising that it was spun out into a separate company that is now run by the Confio founder/CEO and technology management team. Iridium uses deduplication technology to both shrink the databases and boost I/O performance.  It is intriguing to see it work.  It will deduplicate a live database as it is running transactions.  You can watch the database get smaller while user queries are running. Iridium is a simple tool to use. After installing the software, you click an “Analyze” button which will spend a minute or two on each database and estimate both your storage and performance savings.  Next, you click an “Activate” button to turn on Iridium I/O for your selected databases.  You don’t need to reboot the operating system or restart the database during any part of the process. As part of my test, I also wanted to see if there would be an impact on my databases when Iridium was removed.  The ‘revert’ process (bringing the files back to their SQL Server native format) was executed by a simple click of a button, and completed while the databases were available for normal processing. I was impressed and enjoyed playing with the software and encourage all of you to try it out.  Here is the link to the website to download Iridium for free. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Database Developer - October 2013 issue: Download Database 12c and related products

    - by Javier Puerta
    The October issue of the Database Application Developer newsletter is now available. The focus of this issue is on downloads of Database 12c and related products. (Full newsletter here)

    Get Ready to Download, Deploy and Develop for Oracle Database 12c. This month we're focused on downloads. We've rounded up the top developer releases (both early adopter and BETA releases) and the articles that will help you do more with Oracle 12c. See the technical content that will help you get started. If you're ready... away we go! — Laura Ramsey, Database and Developer Community, Oracle Technology Network Team

    FEATURED DOWNLOADS

    Download: Oracle Database 12c - According to Tom Kyte, the Oracle 12c version has some of the biggest enhancements to the core database since version 6 - check it out for yourself.

    Download: Oracle SQL Developer 4.0 Early Adopter 2 is Here - Oracle SQL Developer is a free IDE that simplifies the development and management of Oracle Database. It is a complete end-to-end development platform for your PL/SQL applications that features a worksheet for running queries and scripts, a DBA console for managing the database, a reports interface, a complete data modeling solution and a migration platform for moving your 3rd party databases to Oracle. If you are interested in checking out this new early adopter version, Oracle SQL Developer 4.0 EA is the place to go.

    Download: Oracle 12c Multitenant Self Provisioning Application -BETA- - The -BETA- is here. The Multitenant Self Provisioning Application is an easy and productive way for DBAs and developers to get familiar with powerful PDB features including create, clone, plug and unplug. No better time to start playing with PDBs. Oracle 12c Multitenant Self Provisioning Application.

    Download: New! Updates to Oracle Data Integration Portfolio - Oracle GoldenGate 12c and Oracle Data Integrator 12c are now available. From real-time data integration, transactional change data capture and data replication, to high-volume, high-performance batch loads and event-driven, trickle-feed integration processes - it's now available. Go here for all the details and links to downloads... and congratulations, Data Integration Team!

    Download: Oracle VM Templates for Oracle 12c - Features support for Single Instance, Oracle Restart and Oracle RAC, and support for all current Oracle Database 11.2 versions as well as Oracle 12c on Oracle Linux 5 Update 9 and Oracle Linux 6 Update 4. The Oracle 12c templates allow end-to-end automation for Flex Cluster, Flex ASM and PDBs. See how the Deploycluster tool was updated to support Single Instance and the new Oracle 12c features. Oracle VM Templates for Oracle Database.

    Download: Oracle SQL Developer Data Modeler 4.0 EA 3 - If you're looking for a data modeling and database design tool that provides an environment for capturing, modeling, managing and exploiting metadata, it's time to check out Oracle SQL Developer Data Modeler. Oracle SQL Developer Data Modeler 4.0 EA V3 is here.

    Read the article

  • Real-time Big Data Analytics is a reality for StubHub with Oracle Advanced Analytics

    - by Mark Hornick
    What can you use for a comprehensive platform for real-time analytics? How can you process big data volumes for near-real-time recommendations and dramatically reduce fraud? Learn in this video what StubHub achieved with Oracle R Enterprise from the Oracle Advanced Analytics option to Oracle Database, and read more on their story here.

    Advanced analytics solutions that impact the bottom line of a business are challenging due to the range of skills and individuals involved in realizing such solutions. While we hear a lot about the role of the data scientist, that role is but one piece of the puzzle. Advanced analytics solutions also have an operationalization aspect that requires close proximity to where the transactional activity occurs. The data scientist needs access to the right data with which to model the business problem. This involves IT for data collection, management, and administration, as well as ensuring zero downtime (a website needs to be up 24x7). It also involves working with the data scientist to keep predictive models refreshed with the latest scripts. Integrating advanced analytics solutions into enterprise apps involves not just generating predictions, but supporting the whole life-cycle from data collection to model building, model assessment, and then outcome assessment and feedback into the model building process again. Application and web interface designers need to take into account how end users will see and use the advanced analytics results, e.g., supporting operations staff who need to handle potentially fraudulent transactions. As just described, advanced analytics projects can be "complicated" from just a human perspective. The extent to which software can simplify the interactions among users and systems will increase the likelihood of project success. The ability to quickly operationalize advanced analytics projects and demonstrate measurable value means the difference between a successful project and just a nice research report.

    By standardizing on Oracle Database and SQL invocation of R, along with in-database modeling as found in Oracle Advanced Analytics, expedient model deployment and zero downtime for refreshing models become a reality. Meanwhile, data scientists are also able to explore leading-edge techniques available in open source. The Oracle solution propels the entire organization forward to realize the value of advanced analytics.

    Read the article

  • How (recipe) to build only one kernel module?

    - by Pro Backup
    I have a bug in a Linux kernel module that causes the stock Ubuntu 14.04 kernel to oops (crash). That is why I want to edit/patch the source of only that single kernel module to add some extra debug output. The kernel module in question is mvsas, and it is not necessary for booting, so I don't see any need to update any initrd images. I have read a lot of information (as shown below) and find the setup and build process confusing. I need two recipes: how to set up/configure the build environment once, and the steps to follow after editing any source file of this kernel module (.c and .h) to turn that edit into a new kernel module (.ko). The sources that have been used are: build one kernel module - Google search http://www.linuxquestions.org/questions/linux-kernel-70/rebuilding-a-single-kernel-module-595116/ http://stackoverflow.com/questions/8744087/how-to-recompile-just-a-single-kernel-module http://www.pixelbeat.org/docs/rebuild_kernel_module.html How do I build a single in-tree kernel module? http://ubuntuforums.org/showthread.php?t=1153067 http://ubuntuforums.org/showthread.php?t=2112166 http://ubuntuforums.org/showthread.php?t=1115593 build one kernel module ubuntu - Google search 'make +single +kernel +module' - Ask Ubuntu 'make +kernel +module' - Ask Ubuntu My makefile results in: No rule to make target `arch/x86/tools/relocs.c', needed '"Invalid module format"' - Ask Ubuntu Driver installation: compiling source code for newer kernel Modprobe: 'Invalid nodule format', yet works after insmod "Symbol version dump" "is missing" - Google search http://stackoverflow.com/questions/9425523/should-i-care-that-the-symbol-version-dump-is-missing-how-do-i-get-one Where can I find the corresponding Module.symvers and .config files for 12.04.3 i386 server? "no symbol version for module_layout" when trying to load usbhid.ko Broken links inside Linux header file folder 'make modules_install' - Ask Ubuntu 'modules_install' - Ask Ubuntu Empty build directory in custom compiled kernel Not able to see pr_info output In which directory are the kernel source files and how can I recompile it? How can I compile and install that patched libata-eh.c file? 'modules_install +depmod' - Ask Ubuntu modules_install depmod - Google search "make modules_install" - Google search http://www.csee.umbc.edu/courses/undergraduate/CMSC421/fall02/burt/projects/howto_build_kernel.html http://unix.stackexchange.com/questions/20864/what-happens-in-each-step-of-the-linux-kernel-building-process https://wiki.ubuntu.com/KernelCustomBuild http://www.cyberciti.biz/tips/build-linux-kernel-module-against-installed-kernel-source-tree.html http://www.linuxforums.org/forum/kernel/170617-solved-make-modules_install-different-path.html "make prepare" - Google search "make prepare" "scripts/kconfig/conf --silentoldconfig Kconfig" - Google search http://ubuntuforums.org/showthread.php?t=1963515 ubuntu "make prepare" version - Google search http://stackoverflow.com/questions/8276245/how-to-compile-a-kernel-module-against-a-new-source https://help.ubuntu.com/community/Kernel/Compile How do I compile a kernel module? How to add a custom driver to my kernel? Compile and loading kernel module without compiling the kernel

    Read the article

  • Dynamic Filtering

    - by Ricardo Peres
    Continuing my previous posts on dynamic LINQ, now it's time for dynamic filtering. For now, I'll focus on string matching. There are four standard operators for string matching that NHibernate, Entity Framework and LINQ to SQL all recognize: Equals, Contains, StartsWith and EndsWith. So, if we want to apply filtering by one of these operators on a string property, we can use this code:

        public enum MatchType
        {
            StartsWith = 0,
            EndsWith = 1,
            Contains = 2,
            Equals = 3
        }

        public static List<T> Filter<T>(IEnumerable<T> enumerable, String propertyName, String filter, MatchType matchType)
        {
            return (Filter(enumerable, typeof(T), propertyName, filter, matchType) as List<T>);
        }

        public static IList Filter(IEnumerable enumerable, Type elementType, String propertyName, String filter, MatchType matchType)
        {
            MethodInfo asQueryableMethod = typeof(Queryable).GetMethods(BindingFlags.Static | BindingFlags.Public)
                .Where(m => (m.Name == "AsQueryable") && (m.ContainsGenericParameters == false)).Single();

            IQueryable query = (enumerable is IQueryable) ?
                (enumerable as IQueryable) :
                asQueryableMethod.Invoke(null, new Object[] { enumerable }) as IQueryable;

            MethodInfo whereMethod = typeof(Queryable).GetMethods(BindingFlags.Public | BindingFlags.Static)
                .Where(m => m.Name == "Where").ToArray()[0].MakeGenericMethod(elementType);

            MethodInfo matchMethod = typeof(String).GetMethod
            (
                (matchType == MatchType.StartsWith) ? "StartsWith" :
                (matchType == MatchType.EndsWith) ? "EndsWith" :
                (matchType == MatchType.Contains) ? "Contains" :
                "Equals",
                new Type[] { typeof(String) }
            );

            PropertyInfo displayProperty = elementType.GetProperty(propertyName, BindingFlags.Public | BindingFlags.Instance);

            MemberExpression member = Expression.MakeMemberAccess(Expression.Parameter(elementType, "n"), displayProperty);
            MethodCallExpression call = Expression.Call(member, matchMethod, Expression.Constant(filter));
            LambdaExpression where = Expression.Lambda(call, member.Expression as ParameterExpression);

            query = whereMethod.Invoke(null, new Object[] { query, where }) as IQueryable;

            MethodInfo toListMethod = typeof(Enumerable).GetMethod("ToList", BindingFlags.Static | BindingFlags.Public)
                .MakeGenericMethod(elementType);

            IList list = toListMethod.Invoke(null, new Object[] { query }) as IList;

            return (list);
        }

        var list = new[]
        {
            new { A = "aa" },
            new { A = "aabb" },
            new { A = "ccaa" },
            new { A = "ddaadd" }
        };

        var contains = Filter(list, "A", "aa", MatchType.Contains);
        var endsWith = Filter(list, "A", "aa", MatchType.EndsWith);
        var startsWith = Filter(list, "A", "aa", MatchType.StartsWith);
        var equals = Filter(list, "A", "aa", MatchType.Equals);

    Perhaps I'll write some more posts on this subject in the near future.
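
    To make it clearer what the expression tree above ends up building, here is the statically typed equivalent of the Contains case - a comparison sketch only, not part of the original post:

        // Filter(list, "A", "aa", MatchType.Contains) builds, at runtime,
        // the same query you would otherwise write by hand at compile time:
        var containsStatic = list.AsQueryable()
                                 .Where(n => n.A.Contains("aa"))
                                 .ToList();

    The dynamic version simply constructs the "n => n.A.Contains(\"aa\")" lambda from the property name and MatchType supplied as strings and enum values.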

    Read the article

  • Softpedia published some of my open source projects — how to react?

    - by polarblau
    (FYI: I've just moved this question over from Stackoverflow on recommendation.) I just received a few emails informing me that softpedia.com has added some of my "products" to their "database of scripts, code snippets and web applications". My products are in this case some smaller open source projects, which I have hosted and published on GitHub. Now I'm wondering how to react to this. This site is indirectly making money off my free work through ads on three pages before the actual download. They also seem to "invent" version numbers, and I can't find out if they're hosting the latest or all versions of my projects. — I can see how this could lead to problems in the future, since I don't control what's "the latest" everywhere. On the other hand I don't mind some extra publicity. I want as many people as possible to know about the projects, use them, fork them and hopefully improve them. The projects in question are really fairly small, but this might not be the case in the future for me and/or other people reading this question. I'm sure that this must have happened to others around here. What's your opinion? Should I try to get the downloads removed?

    Update 1: I've requested the removal and mentioned that I don't feel that Softpedia can provide the right environment for this kind of project. Their team got back to me instantly with a friendly email saying that they'll remove the links for now: "If you are worried that your projects won't be updated, then I must tell you that I have them bookmarked in my RSS reader, so any version changes will be forwarded to me when needed. So I promise I'll keep your script up to date as soon as I see an update in the repository." I have to say that I appreciate this kind of reaction quite a lot, and so I sent them another email describing in more detail what I'm worried about and what bothers me. I also stated that I'm aware that my license clearly permits them to host the projects in any case, but that I'd even be happy if they would host the projects as long as they could convince me of a few details and maybe make some small changes to the way the projects are represented. — Let's see where this goes.

    Update 2: After discussing with their contact and requesting some changes regarding the display of version (they had given the possibility to do so) and authorship, they put the projects back up on their site. All in all a positive and definitely interesting experience.

    Read the article

  • Manage and Monitor Identity Ranges in SQL Server Transactional Replication

    - by Yaniv Etrogi
    Problem

    When using transactional replication to replicate data in a one-way topology from a publisher to read-only subscribers, there is no need to manage identity ranges. However, when using transactional replication to replicate data in a two-way replication topology - between two or more servers - there is a need to manage identity ranges in order to prevent a situation where an INSERT command fails on a PRIMARY KEY violation error because the replicated row being inserted has a value for the identity column which already exists at the destination database.

    Solution

    There are two ways to address this situation: assign a range of identity values to each server, or work with parallel identity values. The first method requires some maintenance while the second method does not, and so the scripts provided with this article are very useful for anyone using the first method. I will explore this in more detail later in the article.

    In the first solution, set server1 to work in the range of 1 to 1,000,000,000 and server2 to work in the range of 1,000,000,001 to 2,000,000,000. The ranges are set and defined using the DBCC CHECKIDENT command, and when the ranges in this example are well maintained you meet the goal of preventing INSERT commands from failing due to a PRIMARY KEY violation. The first insert at server1 will get the identity value of 1, the second insert will get the value of 2 and so on, while on server2 the first insert will get the identity value of 1000000001, the second insert 1000000002 and so on, thus avoiding a conflict. Be aware that when a row is inserted the identity value (seed) is generated as part of the insert command at each server and the inserted row is replicated. The replicated row includes the identity column’s value, so the data remains consistent across all servers, but you will be able to tell on which server the original insert took place due to the range that the identity value belongs to.

    In the second solution you do not manage ranges but enforce a situation in which identity values can never overlap, by setting the first identity value (seed) and the increment property one time only during the CREATE TABLE command of each table. So a table on server1 looks like this:

        CREATE TABLE T1 (
            c1 int NOT NULL IDENTITY(1, 5) PRIMARY KEY CLUSTERED,
            c2 int NOT NULL
        );

    And a table on server2 looks like this:

        CREATE TABLE T1 (
            c1 int NOT NULL IDENTITY(2, 5) PRIMARY KEY CLUSTERED,
            c2 int NOT NULL
        );

    When rows are inserted into these two tables, the identity values look like this:

        Server1:  1, 6, 11, 16, 21, 26…
        Server2:  2, 7, 12, 17, 22, 27…

    This assures no identity value conflicts while leaving room for 3 additional servers to participate in this same environment. You can go up to 9 servers using this method by setting an increment value of 9 instead of 5 as I used in this example. Continues…
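
    As a quick sanity check of the second approach, here is a small sketch (not from the article) that generates the two seed/increment sequences and confirms they can never overlap:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class IdentityRangeCheck
        {
            // Generates the values an IDENTITY(seed, increment) column would produce
            static IEnumerable<long> Identities(long seed, long increment, int count) =>
                Enumerable.Range(0, count).Select(i => seed + (long)i * increment);

            static void Main()
            {
                // Server1 uses IDENTITY(1, 5), server2 uses IDENTITY(2, 5)
                var server1 = new HashSet<long>(Identities(1, 5, 1000));
                var server2 = Identities(2, 5, 1000);

                // Every server2 value is offset by 1 within the same increment,
                // so no collision with server1 is possible
                Console.WriteLine(server2.Any(server1.Contains)
                    ? "overlap found"
                    : "no overlap - safe");
            }
        }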

    Read the article

< Previous Page | 267 268 269 270 271 272 273 274 275 276 277 278  | Next Page >