Search Results

Search found 62759 results on 2511 pages for 'view first'.


  • Python PyBluez loses Bluetooth connection after a while

    - by Travis G.
    I am using Python to write a simple serial Bluetooth script that sends information about my computer stats periodically. The receiving device is a Sparkfun BlueSmirf Silver. The problem is that, after the script runs for a few minutes, it stops sending packets to the receiver and fails with the error: (11, 'Resource temporarily unavailable')

    Noticing that this inevitably happens, I added some code to automatically try to reopen the connection. However, then I get: Could not connect: (16, 'Device or resource busy')

    Am I doing something wrong with the connection? Do I need to occasionally reopen the socket? I'm not sure how to recover from this type of error. I understand that sometimes the port will be busy and a write operation is deferred to avoid blocking other processes, but I wouldn't expect the connection to fail so regularly. Any thoughts? Here is the script:

        import psutil
        import serial
        import string
        import time
        import bluetooth

        sampleTime = 1
        numSamples = 5
        lastTemp = 0
        TEMP_CHAR = 't'
        USAGE_CHAR = 'u'
        SENSOR_NAME = 'TC0D'

        #gauges = serial.Serial()
        #gauges.port = '/dev/rfcomm0'
        #gauges.baudrate = 9600
        #gauges.parity = 'N'
        #gauges.writeTimeout = 0
        #gauges.open()

        filename = '/sys/bus/platform/devices/applesmc.768/temp2_input'

        def parseSensorsOutputLinux(output):
            return int(round(float(output) / 1000))

        def connect():
            while(True):
                try:
                    gaugeSocket = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
                    gaugeSocket.connect(('00:06:66:42:22:96', 1))
                    break;
                except bluetooth.btcommon.BluetoothError as error:
                    print "Could not connect: ", error, "; Retrying in 5s..."
                    time.sleep(5)
            return gaugeSocket;

        gaugeSocket = connect()
        while(1):
            usage = psutil.cpu_percent(interval=sampleTime)
            sensorFile = open(filename)
            temp = parseSensorsOutputLinux(sensorFile.read())
            try:
                #gauges.write(USAGE_CHAR)
                gaugeSocket.send(USAGE_CHAR)
                #gauges.write(chr(int(usage))) #write the first byte
                gaugeSocket.send(chr(int(usage)))
                #print("Wrote usage: " + str(int(usage)))
                #gauges.write(TEMP_CHAR)
                gaugeSocket.send(TEMP_CHAR)
                #gauges.write(chr(temp))
                gaugeSocket.send(chr(temp))
                #print("Wrote temp: " + str(temp))
            except bluetooth.btcommon.BluetoothError as error:
                print "Caught BluetoothError: ", error
                time.sleep(5)
                gaugeSocket = connect()
                pass
        gaugeSocket.close()

    EDIT: I should add that this code connects fine after I power-cycle the receiver and start the script. However, it fails after the first exception until I restart the receiver.

    P.S. This is related to my recent question, Why is /dev/rfcomm0 giving PySerial problems?, but that was more about PySerial specifically with rfcomm0. Here I am asking about general rfcomm etiquette.
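    A possible explanation, offered here as a hedged sketch rather than a confirmed fix: error 16 often means the previous RFCOMM socket was never closed after the failed send, so the channel is still held when the next connect is attempted. Closing the dead socket before reconnecting, and backing off between attempts, is worth trying:

        import time
        import bluetooth

        GAUGE_ADDR = ('00:06:66:42:22:96', 1)  # address and channel taken from the script above

        def reconnect(old_socket=None, delay=5):
            # Release the RFCOMM channel held by the dead connection before retrying.
            if old_socket is not None:
                try:
                    old_socket.close()
                except bluetooth.btcommon.BluetoothError:
                    pass
            while True:
                sock = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
                try:
                    sock.connect(GAUGE_ADDR)
                    return sock
                except bluetooth.btcommon.BluetoothError as error:
                    sock.close()
                    print("Could not connect: %s; retrying in %ds..." % (error, delay))
                    time.sleep(delay)

        # In the send loop, the except block would then call:
        #     gaugeSocket = reconnect(gaugeSocket)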

    Read the article

  • Root Access: Don Dodge and Louis Gray on Entrepreneurs

    Root Access: Don Dodge and Louis Gray on Entrepreneurs Between them both, Don Dodge and Louis Gray have worked at the smallest of startups, raised VC rounds big and small, launched companies for the first time, and seen their share of successes and failures. Now at Google, they talk about some of the secret ingredients that make teams, ideas and companies work. From: GoogleDevelopers Views: 0 0 ratings Time: 00:00 More in Science & Technology

    Read the article

  • How do I install Sheetmaker?

    - by Justas
    Sheetmaker generates moviesheets and similar coversheets for TV shows. I would like to install Sheetmaker, but it looks a bit complicated to me. I have downloaded it and followed the installation instructions from http://www.users.on.net/~garstev99/wdtv/installation.html and also from this source: http://wdtvforum.com/main/index.php?topic=7912.0 but I am stuck on the first step - I can't run ModuleTest.pl in a terminal; the terminal opens for 1-2 seconds and then closes without any output. I would really appreciate it if someone could help me run this program. Related question

    Read the article

  • Quickie Guide Getting Java Embedded Running on Raspberry Pi

    - by hinkmond
    Gary C. and I did a Bay Area Java User Group presentation on how to get Java Embedded running on a RPi. See: here. But, if you want the Quickie Guide on how to get Java up and running on the RPi, then follow these steps (which I'm doing right now as we speak, since I got my RPi in the mail on Monday. Woo-hoo!!!). So, follow along at home as I do the same steps here on my board...
    1. Download Win32DiskImager if you are on Windows, or use dd on a Linux PC: https://launchpad.net/win32-image-writer/0.6/0.6/+download/win32diskimager-binary.zip
    2. Download the RPi Debian Wheezy image from here: http://files.velocix.com/c1410/images/debian/7/2012-08-08-wheezy-armel/2012-08-08-wheezy-armel.zip
    3. Insert a blank 4GB SD Card into your Windows or Linux PC.
    4. Use either Win32DiskImager or Linux dd to burn the unzipped image from #2 to the SD Card.
    5. Insert the SD Card into your RPi. Connect an Ethernet cable from your RPi to your network. Connect the RPi power adapter.
    6. The RPi will boot onto your network. Find its IP address using Wireshark on Windows, or on Linux: sudo tcpdump -vv -ieth0 port 67 and port 68
    7. ssh to your RPi: ssh <ip_addr_rpi> -l pi  (password: "raspberry")
    8. Download Java SE Embedded: http://www.oracle.com/technetwork/java/embedded/downloads/javase/index.html  NOTE: First click accept, then choose the first bundle in the list: ARMv6/7 Linux - Headless EABI, VFP, SoftFP ABI, Little Endian - ejre-7u6-fcs-b24-linux-arm-vfp-client_headless-10_aug_2012.tar.gz
    9. scp the bundle from #8 to your RPi: scp <ejre-bundle> pi@<ip_addr_rpi>:
    10. mkdir /usr/local, untar the bundle from #9, and rename (move) the ejre1.7.0_06 directory to /usr/local/java
    That's it! You are ready to roll with Java Embedded on your RPi. Hinkmond

    Read the article

  • A conversation with Paul Rademacher and Mano Marks, Google Maps API Office Hours

    A conversation with Paul Rademacher and Mano Marks, Google Maps API Office Hours This is a conversation between Paul Rademacher and Mano Marks on April 24th, 2012. Paul created the first Google Maps Mashup, housingmaps.com, and discusses his latest project, Stratocam, which allows users to find and display beautiful satellite and aerial imagery with the Google Maps API. From: GoogleDevelopers Views: 1199 11 ratings Time: 40:08 More in Science & Technology

    Read the article

  • Autoscaling in a modern world&hellip;. Part 3

    - by Steve Loethen
    The Wasabi Hands-on Labs give you a good look at the basic mechanics, but I don't find the setup too practical.  Using a local console application to host the Autoscaler and rules files is probably the (IMHO) least likely architecture.  Far more common would be hosting in a service on premise (if you want to have the Autoscaler local) or, most likely, hosting it in an Azure role of its own.  I chose to go the Azure route.
    The first step was to get the rules.xml and the services.xml files into the cloud.  I tend to be a "one step at a time" sort of guy, so running the console application with the rules sitting in an Azure-hosted set of blobs seemed to be the logical first step.  Here are the steps:
    1) Create a container in the storage account you wish to use.  The name does not matter; you will get a chance to set the container name (as well as the file names) in the app.config.
    2) Copy the two files from where you created them to your container.  I used the same files I had locally.  I made the container public to eliminate security issues, but in the final application a bit of security needs to be applied (one problem at a time).  The content type was set to text/xml.  I found one reference claiming the importance of this step, and it makes sense.
    3) Adjust the app.config to set the location of the files.  This will let you set all the storage account and key information needed to reach into the cloud from your console application.  The sections of your app.config will look like this:
    <rulesStores>
      <add name="Blob Rules Store"
           type="Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling.Rules.Configuration.BlobXmlFileRulesStore, Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling, Version=5.0.1118.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
           blobContainerName="[ContainerName]" blobName="rules.xml"
           storageAccount="DefaultEndpointsProtocol=https;AccountName=[StorageAccount];AccountKey=[AccountKey]"
           monitoringRate="00:00:30" certificateThumbprint="" certificateStoreLocation="LocalMachine" checkCertificateValidity="false" />
    </rulesStores>
    <serviceInformationStores>
      <add name="Blob Service Information Store"
           type="Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling.ServiceModel.Configuration.BlobXmlFileServiceInformationStore, Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling, Version=5.0.1118.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
           blobContainerName="[ContainerName]" blobName="services.xml"
           storageAccount="DefaultEndpointsProtocol=https;AccountName=[StorageAccount];AccountKey=[AccountKey]"
           monitoringRate="00:00:30" certificateThumbprint="" certificateStoreLocation="LocalMachine" checkCertificateValidity="false" />
    </serviceInformationStores>
    Once I had the files up in the sky, I renamed the local copies just to make myself feel better about the application using the correct set of rules and services.  Deploy the web role to the cloud.  Once it is up and running, start the console application.  You should find the application scales up and down in response to the buttons on the web site.  Tune in next time for moving the hosting of the Autoscaler to a worker role, discussions on getting the logging information from diagnostics into storage, and a set of discussions about certs and how they play a role.

    Read the article

  • Azure Flavor for the Sharepoint Media Component

    - by spano
    Some time ago I wrote about a Media Processing Component for Sharepoint that I was working on. It is a Media Assets list for Sharepoint that lets you choose where to store the blob files. It also provides intelligence for encoding videos, generating thumbnail and poster images, obtaining media metadata, etc. In that first post the component was explained in detail, with the original 3 storage flavors: Sharepoint list, Virtual Directory or FTP. The storage manager is extensible, so a new flavor was...(read more)

    Read the article

  • Some date math fun

    - by drsql
    Note: The concept presented here is pretty simple and I am not claiming that I am the first to do this… so if you were the one who came up with it, let me know and I will give you linkage. Today I needed to get the data for the current month for a query, and it hit me that I really didn't have a good way to do this.  There are two common methods that people use: WHERE YEAR(MonthColumn) = YEAR(Getdate()) AND MONTH(MonthColumn) = MONTH(Getdate()) But this particular method is pretty horrible...(read more)
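    A hedged illustration (mine, not from the post): the usual objection to YEAR(MonthColumn) = YEAR(GETDATE()) AND MONTH(MonthColumn) = MONTH(GETDATE()) is that wrapping the column in functions keeps the optimizer from doing an index seek. One alternative is to compute the month boundaries up front and compare the bare column against them with a half-open range; here is a small sketch of the boundary calculation:

        import datetime

        def current_month_range(today=None):
            # Return (first day of this month, first day of next month) for a
            # half-open predicate: MonthColumn >= start AND MonthColumn < end.
            today = today or datetime.date.today()
            start = today.replace(day=1)
            if start.month == 12:
                end = start.replace(year=start.year + 1, month=1)
            else:
                end = start.replace(month=start.month + 1)
            return start, end

        start, end = current_month_range()
        print(start, end)  # pass these as query parameters instead of applying YEAR()/MONTH() to the column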

    Read the article

  • Android camera AVD error 100

    - by Cristian Voina
    I am trying to learn how to make a simple app in Android. As background information, I have only programmed in the C language, no OOP. Currently I am trying to turn on the camera using the instructions from the Android Developer site, but with some minor changes: no button for capturing an image, and no new activity. What I am trying to do is just preview the camera. I will post the code that I am using, the manifest and the LogCat.
    Main Activity:
        package com.example.camera_display;

        import android.app.Activity;
        import android.hardware.Camera;
        import android.os.Bundle;
        import android.view.Menu;
        import android.widget.FrameLayout;
        import android.widget.TextView;

        public class MainActivity extends Activity {

            private Camera mcamera;
            private CameraPreview mCameraPreview;

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.activity_main);
                mcamera = getCameraInstance();
                mCameraPreview = new CameraPreview(this, mcamera);
                FrameLayout preview = (FrameLayout) findViewById(R.id.camera_preview);
                preview.addView(mCameraPreview);
            }

            public static Camera getCameraInstance() {
                Camera c = null;
                try {
                    c = Camera.open(); // attempt to get a Camera instance
                } catch (Exception e) {
                    // Camera is not available (in use or does not exist)
                }
                return c; // returns null if camera is unavailable
            }

            @Override
            public boolean onCreateOptionsMenu(Menu menu) {
                // Inflate the menu; this adds items to the action bar if it is present.
                getMenuInflater().inflate(R.menu.main, menu);
                return true;
            }
        }
    CameraPreview class:
        package com.example.camera_display;

        import java.io.IOException;

        import android.content.Context;
        import android.hardware.Camera;
        import android.util.Log;
        import android.view.SurfaceHolder;
        import android.view.SurfaceView;

        /** A basic Camera preview class */
        public class CameraPreview extends SurfaceView implements SurfaceHolder.Callback {

            private static final String TAG = "MyActivity";
            private SurfaceHolder mHolder;
            private Camera mCamera;

            public CameraPreview(Context context, Camera camera) {
                super(context);
                mCamera = camera;
                // Install a SurfaceHolder.Callback so we get notified when the
                // underlying surface is created and destroyed.
                mHolder = getHolder();
                mHolder.addCallback(this);
                // deprecated setting, but required on Android versions prior to 3.0
                mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
            }

            public void surfaceCreated(SurfaceHolder holder) {
                // The Surface has been created, now tell the camera where to draw the preview.
                try {
                    mCamera.setPreviewDisplay(holder);
                    mCamera.startPreview();
                } catch (IOException e) {
                    Log.d(TAG, "Error setting camera preview: " + e.getMessage());
                }
            }

            public void surfaceDestroyed(SurfaceHolder holder) {
                // empty. Take care of releasing the Camera preview in your activity.
            }

            public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
                // If your preview can change or rotate, take care of those events here.
                // Make sure to stop the preview before resizing or reformatting it.
                if (mHolder.getSurface() == null) {
                    // preview surface does not exist
                    return;
                }

                // stop preview before making changes
                try {
                    mCamera.stopPreview();
                } catch (Exception e) {
                    // ignore: tried to stop a non-existent preview
                }

                // set preview size and make any resize, rotate or
                // reformatting changes here

                // start preview with new settings
                try {
                    mCamera.setPreviewDisplay(mHolder);
                    mCamera.startPreview();
                } catch (Exception e) {
                    Log.d(TAG, "Error starting camera preview: " + e.getMessage());
                }
            }
        }
    Manifest:
        <uses-permission android:name="android.permission.CAMERA"/>
        <uses-feature android:name="android.hardware.camera"/>
    LogCat:
        06-30 15:58:35.075: D/libEGL(1153): loaded /system/lib/egl/libEGL_emulation.so
        06-30 15:58:35.147: D/(1153): HostConnection::get() New Host Connection established 0x2a156060, tid 1153
        06-30 15:58:35.478: D/libEGL(1153): loaded /system/lib/egl/libGLESv1_CM_emulation.so
        06-30 15:58:35.515: D/libEGL(1153): loaded /system/lib/egl/libGLESv2_emulation.so
        06-30 15:58:36.334: W/EGL_emulation(1153): eglSurfaceAttrib not implemented
        06-30 15:58:36.685: D/OpenGLRenderer(1153): Enabling debug mode 0
        06-30 15:58:36.935: D/MyActivity(1153): Error starting camera preview: startPreview failed
        06-30 15:58:36.965: I/Choreographer(1153): Skipped 125 frames! The application may be doing too much work on its main thread.
        06-30 15:58:38.455: W/Camera(1153): Camera server died!
        06-30 15:58:38.455: W/Camera(1153): ICamera died
        06-30 15:58:38.476: E/Camera(1153): Error 100
    So if anyone could tell me what I am doing wrong (and explain why it should be done differently), that would be great :) Thanks!

    Read the article

  • Unity quick launch icon opens another program icon on program start

    - by Patrick
    I prepared a custom .desktop file for my Chromium browser that loads a different user profile. Whenever I click the icon now, it opens the browser, but it also adds a new icon to the launcher bar (not locked to the launcher). Is there any way to tell the program to keep connected to the first icon?
    [Desktop Entry]
    Type=Application
    Name=Second Browser
    Exec=chromium-browser --user-data-dir="/home/patrick/bin/chrome-profiles/second"
    TryExec=chromium-browser
    Icon=/home/patrick/.local/share/applications/icons/browser.png
    MimeType=text/html;

    Read the article

  • Keep Track of Your Tasks with toDoo

    - by Asian Angel
    A task list can be convenient, but most of the time you cannot include details for those tasks, or you have to have an online account to do so. If you want to keep your task list with you on your computer or laptop and be able to add plenty of details, then you might want to look at toDoo. Note: Requires Adobe AIR (download link at the bottom of the article).
    toDoo in Action
    Once you have installed toDoo, everything is rather straightforward for getting started. The first time that you start toDoo there will be a temporary "fill-in" for the "Subject & Details Areas". Simply highlight the temporary text and add your information. Notice that, if desired, you can easily set a custom date and time for your tasks right below the "Details Area". Note: toDoo does not minimize to the "System Tray".
    Once you have everything set, all that you need to do is click on "add task". Here was our first new task being viewed in the "toDoo Description Tab". Time to add a second task… here you can see the drop-down calendar. You can scroll through and select a different month very easily… just click on the desired day and it will be set automatically. Adding our second task…
    If you need to edit any of the details for a particular task you can do so in the "Edit toDoo Tab". This nice little app is convenient and easy to use.
    Conclusion
    toDoo is a simple, straightforward app that lets you keep track of your task list and relevant details without an online account (especially helpful if you are without a wireless connection at a given moment). If you are looking for more of a list approach that runs on your desktop, then check out our article on Doomi here.
    Links: Download ToDoo at Softpedia | Download ToDoo at Adobe Marketplace | Download Adobe AIR

    Read the article

  • Does 'Web Pages' use the same syntax as 'MVC'?

    - by Laberto
    I see that there is a new model in ASP.NET development called 'ASP.NET Web Pages'. I would like to know if this model resembles the ASP.NET MVC model. The point is that I found it difficult to learn ASP.NET MVC, and someone told me: OK, if you learn ASP.NET Web Pages first, then learning ASP.NET MVC will be easier because of the Razor syntax in both models. If you have tried both, could you please tell me whether this is true?

    Read the article

  • F# in 90 Seconds

    - by Ben Griswold
    I mentioned in a previous post that we've started a languages club at the office.  In an effort to decide which language we will concentrate on first, I volunteered to give the rundown on F#.  Rather than providing a summary here, I've provided my slide deck for your viewing enjoyment.  There's nothing special here outside of some pretty cool characters from The 56 Geeks Project by Scott Johnson and a collection of information from my prior functional programming presentations.   Download F# in 90 Seconds

    Read the article

  • Unity DontDestroyOnLoad causing scenes to stay open

    - by jkrebsbach
    Originally posted on: http://geekswithblogs.net/jkrebsbach/archive/2014/08/11/unity-dontdestroyonload-causing-scenes-to-stay-open.aspx
    My Unity project has a class (ClientSettings) where most of the game state & management properties are stored.  Among these are some utility functions that derive from MonoBehavior.  However, between every scene this object was getting recreated and I was losing all sorts of useful data.  I learned that with DontDestroyOnLoad, I can persist this entity between scenes.  Super.
    Persisting information between scenes
    The problem with adding DontDestroyOnLoad to my "ClientSettings" was that suddenly my previous scene would stay alive and continue to execute its update routines.  An important part of the documentation helps shed light on my issues: "If the object is a component or game object then its entire transform hierarchy will not be destroyed either."
    My ClientSettings script was attached to the main camera in my first scene.  Because of this, the Main Camera was part of the hierarchy of the component, and therefore also could not be destroyed when switching scenes.  Now the first scene's main camera Update routine continues to execute after the second scene is running - causing me some very nasty bugs.
    Suddenly I wasn't sure how I should be creating a persistent entity - so I created a new sandbox project and tested different approaches until I found one that works.
    In the main scene:
    1. Create an empty Game Object, "GameManager", and attach the ClientSettings script to this game object.  Set any properties of the ClientSettings script as appropriate.
    2. Create a prefab, using the GameManager.
    3. Remove the Game Object from the main scene.
    In the Main Camera, I created a script, MainScript.  This is my primary script for the main scene.
        public GameObject[] prefabs;
        private ClientSettings _clientSettings;

        // Use this for initialization
        void Start () {
            GameObject res = (GameObject)Instantiate(prefabs[0]);
        }
    Now go back out to scene view, and add the new GameManager prefab to the prefabs collection of MainScript.
    When the main scene loads, the GameManager is set up, but is not part of the main scene's hierarchy, so the two are no longer tied up together.
    Now in our second scene, we have a script - SecondScript - and we can get a reference to the ClientSettings we created in the previous scene like so:
        private ConnectionSettings _clientSettings;

        // Use this for initialization
        void Start () {
            _clientSettings = FindObjectOfType<ConnectionSettings> ();
        }
    And the scenes can start and finish without creating strange long-running scene side effects.

    Read the article

  • links for 2010-04-14

    - by Bob Rhubart
    Why business needs should shape IT architecture - McKinsey Quarterly - Business Technology - Organization: "Too often, efforts to fix architecture issues remain rooted in a company’s IT practices, culture, and leadership. The reason, in part, is that the chief architect—the overall IT-architecture program leader—is frequently selected from within the technical ranks, bringing deep IT know-how but little direct experience or influence in leading a business-wide change program. A weak linkage to the business creates a void that limits the quality of the resulting IT architecture and the organization’s ability to enforce and sustain the benefits of implementation over time." -- Helge Buckow and Stéphane Rey (tags: architecture it technology enterprise mckinsey)
    Eric Maurice: April 2010 Critical Patch Update Released: Eric Maurice offers the details on the April 2010 Critical Patch Update (CPUApr2010), "the first one to include security fixes for Oracle Solaris" (tags: oracle otn database fusionmiddleware peoplesoft security)
    @shivmohan: Oracle – OAF – Oracle Application Framework – OA Framework: "For all the PL/SQL and Oracle Forms developers out there, start planning your evolution. Sure PL/SQL and Forms will be around for some time, but you need to add more skills to your stack if you want to stay current (employable)." -- Shivmohan Purohit (tags: oracle otn application framework)
    @ORACLENERD: APEX Architecture: Oracle ACE Chet Justice offers a "short list of potential architectures" for Oracle APEX, based on his experience with a client. (tags: oracle otn oracleace apex architecture)
    Luis Moreno Campos: Why is Exadata so fast?: "You could find a lot of tech doc around oracle.com, but the bottom line is that the vision to even build a V2 and place it as an OLTP and DW (general purpose) machine is just pure genius." -- Luis Moreno Campos (tags: oracle otn exadata database)
    Edwin Biemond: Resetting Weblogic datasources with ANT: Oracle ACE and Whitehorses architect Edwin Biemond shares an ANT script "to fire some WLST and Python commandos" to correct invalid database session states. (tags: oracle otn oracleace database ANT Python)
    @deltalounge: The future of MySQL with Oracle: Peter Paul van de Beek has compiled an informative collection of Edward Scriven quotes, from various publications, on Oracle's plans for MySQL. (tags: oracle otn database mysql)
    Cristobal Soto: Coherence Special Interest Group: First Meeting in Toronto, Upcoming Events in New York and California: Cameron Purdy, Patrick Peralta, and others are speaking at upcoming Coherence SIG events. Cristobal Soto shares the details. (tags: oracle otn coherence sig grid appserver)

    Read the article

  • FREE eBook: .NET Performance Testing and Optimization (Part 1)

    In this first part of a complete guide to performance profiling, Paul Glavich and Chris Farrell explain why performance testing is a good idea and walk you through everything you need to know to set up a test environment. This comprehensive guide to getting started is an essential handbook for any programmer looking to set up a .NET testing environment and get the best results out of it. Download your free copy now.

    Read the article

  • My Name is Andy and I Have an Associates of Applied Science Degree

    - by andyleonard
    Introduction
    This post is the forty-first part of a ramble-rant about the software business. The current posts in this series can be found on the series landing page. This post is about the education requirements for job listings.
    Bachelor’s Degree Required
    I’m not qualified for any job that requires a bachelor’s degree because I don’t have one. I regularly receive email from recruiters asking if I’m available and interested in a position they have open, or a position they’ve been asked to fill...(read more)

    Read the article

  • SQL Server 2012 : Changes to system objects in RC0

    - by AaronBertrand
    As with every new major milestone, one of the first things I do is check out what has changed under the covers. Since RC0 was released yesterday, I've been poking around at some of the DMV and other system changes. Here is what I have noticed:
    New objects in RC0 that weren't in CTP3
    Quick summary: We see a bunch of new aggregates for use with geography and geometry. I've stayed away from that area of programming so I'm not going to dig into them. There is a new extended procedure called sp_showmemo_xml....(read more)

    Read the article

  • Replication Services as ETL extraction tool

    - by jorg
    In my last blog post I explained the principles of Replication Services and the possibilities it offers in a BI environment. One of the possibilities I described was the use of snapshot replication as an ETL extraction tool:
    “Snapshot Replication can also be useful in BI environments, if you don’t need a near real-time copy of the database, you can choose to use this form of replication. Next to an alternative for Transactional Replication it can be used to stage data so it can be transformed and moved into the data warehousing environment afterwards. In many solutions I have seen developers create multiple SSIS packages that simply copies data from one or more source systems to a staging database that figures as source for the ETL process. The creation of these packages takes a lot of (boring) time, while Replication Services can do the same in minutes. It is possible to filter out columns and/or records and it can even apply schema changes automatically so I think it offers enough features here. I don’t know how the performance will be and if it really works as good for this purpose as I expect, but I want to try this out soon!”
    Well, I have tried it out and I must say it worked well. I was able to let Replication Services do the work in a fraction of the time it would cost me to do the same in SSIS. What I did was the following:
    - Configure snapshot replication for some AdventureWorks tables; this was quite simple and straightforward.
    - Create an SSIS package that executes the snapshot replication on demand and waits for its completion. This is something that you can’t do with out-of-the-box functionality. While configuring the snapshot replication, two SQL Agent jobs are created: one for the creation of the snapshot and one for the distribution of the snapshot. Unfortunately these jobs are asynchronous, which means that if you execute them they immediately report back whether the job started successfully or not; they do not wait for completion and report the result afterwards. So I had to create an SSIS package that executes the jobs and waits for their completion before the rest of the ETL process continues. Fortunately I was able to create the SSIS package with the desired functionality.
    I have made a step-by-step guide that will help you configure the snapshot replication, and I have uploaded the SSIS package you need to execute it.
    Configure snapshot replication
    The first step is to create a publication on the database you want to replicate.
    - Connect to SQL Server Management Studio, right-click Replication and choose New.. Publication…
    - The New Publication Wizard appears; click Next.
    - Choose your “source” database and click Next.
    - Choose Snapshot publication and click Next.
    - You can now select tables and other objects that you want to publish. Expand Tables and select the tables that are needed in your ETL process.
    - In the next screen you can add filters on the selected tables, which can be very useful. Think about selecting only the last x days of data, for example. It is possible to filter out rows and/or columns. In this example I did not apply any filters.
    - Schedule the Snapshot Agent to run at a desired time; by doing this a SQL Agent job is created, which we need to execute from an SSIS package later on.
    - Next you need to set the security settings for the Snapshot Agent. Click on the Security Settings button. In this example I ran the Agent under the SQL Server Agent service account. This is not recommended as a security best practice. Fortunately there is an excellent article on TechNet which tells you exactly how to set up the security for replication services. Read it here and make sure you follow the guidelines!
    - On the next screen choose to create the publication at the end of the wizard.
    - Give the publication a name (SnapshotTest) and complete the wizard. The publication is created and the articles (tables in this case) are added.
    Now the publication is created successfully, it is time to create a new subscription for this publication.
    - Expand the Replication folder in SSMS, right-click Local Subscriptions and choose New Subscriptions. The New Subscription Wizard appears.
    - Select the publisher on which you just created your publication and select the database and publication (SnapshotTest).
    - You can now choose where the Distribution Agent should run. If it runs at the distributor (push subscriptions) it causes extra processing overhead. If you use a separate server for your ETL process and databases, choose to run each agent at its subscriber (pull subscriptions) to reduce the processing overhead at the distributor.
    - Of course we need a database for the subscription, and fortunately the wizard can create it for you. Choose New database, give the database the desired name, set the desired options and click OK.
    - You can now add multiple SQL Server subscribers, which is not necessary in this case but can be very useful.
    - You now need to set the security settings for the Distribution Agent. Click on the …. button. Again, in this example I ran the Agent under the SQL Server Agent service account. Read the security best practices here. Click Next.
    - Make sure you create a synchronization job schedule again. This job is also necessary in the SSIS package later on.
    - Initialize the subscription at first synchronization. Select the first box to create the subscription when finishing this wizard.
    - Complete the wizard by clicking Finish. The subscription will be created.
    In SSMS you will see that a new database, the subscriber, has been created. There are no tables or other objects in the database available yet because the replication jobs have not run yet.
    Now expand the SQL Server Agent, go to Jobs and search for the job that creates the snapshot. Rename this job to “CreateSnapshot”. Now search for the job that distributes the snapshot and rename it to “DistributeSnapshot”.
    Create an SSIS package that executes the snapshot replication
    We now need an SSIS package that will take care of the execution of both jobs. The CreateSnapshot job needs to execute and finish before the DistributeSnapshot job runs. After the DistributeSnapshot job has started, the package needs to wait until it is finished before the package execution finishes.
    The Execute SQL Server Agent Job Task is designed to execute SQL Agent jobs from SSIS. Unfortunately this SSIS task only executes the job and reports back whether the job started successfully or not; it does not report whether the job actually completed with success or failure. This is because these jobs are asynchronous. The SSIS package I’ve created does the following:
    - It runs the CreateSnapshot job.
    - It checks every 5 seconds, with a For Loop, whether the job has completed.
    - When the CreateSnapshot job is completed it starts the DistributeSnapshot job.
    - And again it waits until the snapshot is delivered before the package finishes successfully.
    Quite simple, and the package is ready to use as a standalone extract mechanism.
    After executing the package the replicated tables are added to the subscriber database and are filled with data. Download the SSIS package here (SSIS 2008).
    Conclusion
    In this example I only replicated 5 tables; I could create an SSIS package that does the same in approximately the same amount of time. But if I replicated all the 70+ AdventureWorks tables I would save a lot of time and boring work! With Replication Services you also benefit from the feature that schema changes are applied automatically, which means your entire extract phase won’t break. Because a snapshot is created using the bcp utility (bulk copy) it is also quite fast, so the performance will be quite good.
    A disadvantage of using snapshot replication as an extraction tool is the limitation on source systems: you can only choose SQL Server or Oracle databases to act as a publisher. So if you plan to build an extract phase for your ETL process that will involve a lot of tables, think about Replication Services; it would save you a lot of time, and thanks to the Extract SSIS package I’ve created you can fit it perfectly into your usual SSIS ETL process.
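    For readers who would rather script the wait-for-completion logic than build the SSIS package, here is a hedged sketch of the same start-then-poll pattern in Python (my own illustration, not the author's package). It assumes the pyodbc driver and the standard msdb job procedures; the connection string is a placeholder, and the "CreateSnapshot" and "DistributeSnapshot" names follow the renaming done above.

        import time
        import pyodbc

        # Placeholder connection string; adjust server name and authentication to your environment.
        CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
                    "SERVER=myserver;DATABASE=msdb;Trusted_Connection=yes")

        def run_job_and_wait(cursor, job_name, poll_seconds=5):
            # Start the SQL Agent job, then poll its status until it is idle again.
            cursor.execute("EXEC msdb.dbo.sp_start_job @job_name = ?", job_name)
            while True:
                time.sleep(poll_seconds)
                cursor.execute("EXEC msdb.dbo.sp_help_job @job_name = ?, @job_aspect = 'JOB'", job_name)
                row = cursor.fetchone()
                if row.current_execution_status == 4:  # 4 = idle, i.e. the job is no longer executing
                    return

        conn = pyodbc.connect(CONN_STR, autocommit=True)
        cur = conn.cursor()
        run_job_and_wait(cur, "CreateSnapshot")      # build the snapshot first
        run_job_and_wait(cur, "DistributeSnapshot")  # then deliver it to the subscriber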

    Read the article

  • DevExpress XAF, Behavior Driven Development (BDD), Domain Driven Development (DDD) and more&ndash;Introduction

    - by Patrick Liekhus
    OK.  I admit it.  I have been horrible at this blogging thing.  However, I have made a commitment to get better at it so here goes.  I have many crazy ideas when it comes to coding and how to make my processes better and now is the time to get them down on paper and get your feedback.  Now, these ideas might not be nearly as wild and crazy as Charlie Sheen, but at least they help me get through my coding assignments.
    So let’s start by laying out the vision and objectives of this exercise.  I have been trying to come up with the best set of tools, tips and practices so I can get a small team to be as productive as possible without burning out my resources.  My thoughts tend to lean towards the coding practices first as this is what I have been doing for years.  However, as one looks at the process as a whole, we need to remember to keep the users in mind.  If we don’t have a user to accept our application, do we really have an application in the first place?
    I have been using a commercial framework from DevExpress called eXpress Application Framework (XAF) with their eXpress Persistent Objects (XPO) behind the scenes for a few years.  We have had tremendous success with it and even implemented a code generation layer to save us some time.  Now we want more!!!
    My goals here are to create a technical stack that employs as many UI’s as possible, while being true to the layers and documenting the process along the way.  I will continue to have a series of these posts that will walk through each step as I work on it.  Right now here is what I have planned:
    - Defining the solution
    - SCRUM/Agile Story Planning
    - Overview of Architectural Plan
    - Feature Driven Development
    - Domain Driven Development
    - Persistence Layer with XPO
    - Windows UI with XAF/XPO
    - Web UI with XAF/XPO
    - OData Services Layer
    - Windows Mobile UI
    - Android UI
    - iPhone UI
    - Blackberry UI
    - Excel UI
    - Outlook UI
    - Lessons Learned
    I will explain the solution that I plan to implement in the next post.  Thanks again and let me know what you think.

    Read the article
