Search Results

Search found 59230 results on 2370 pages for 'character set'.

Page 684/2370

  • CodePlex Daily Summary for Wednesday, October 16, 2013

    CodePlex Daily Summary for Wednesday, October 16, 2013

    Popular Releases:

    - Hyper-V Management Pack Extensions 2012: HyperVMPE 2012 (1.0.1.206): RTM Release
    - FFXIV Crafting Simulator: Crafting Simulator 2.4: Added: you can now drag & drop to reorganize the sequence (right click to remove now). Fixed a bug with Ingenuity II not taken into consideration for quality increase.
    - C# Intellisense for Notepad++: Release v1.0.8.2: Solved scrolling problem after DocumentFormatting. Implemented "format as you type". To avoid the DLLs getting locked by the OS, use the MSI file for the installation.
    - CS-Script for Notepad++ (C# intellisense and code execution): Release v1.0.8.2: Solved scrolling problem after DocumentFormatting. Implemented "format as you type". To avoid the DLLs getting locked by the OS, use the MSI file for the installation.
    - Collection Commander for Configuration Manager 2012: CMCollCtr 1.0.0: Change log: MSI setup; UI improved; CM12 console integration; new PowerShell code snippets; Client Center integration.
    - LINQ to Twitter: LINQ to Twitter v2.1.09: Supports .NET 3.5, .NET 4.0, .NET 4.5, Silverlight 4.0, Windows Phone 7.1, Windows Phone 8, Client Profile, Windows 8, and Windows Azure. 100% Twitter API coverage. Also supports Twitter API v1.1! Also on NuGet.
    - Sandcastle Help File Builder: SHFB v1.9.8.0 with Visual Studio Package: IMPORTANT: On some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This new release contains bug fixes and feature enhancements. There are some potential breaking changes in this release as some features of the Help File Builder have been moved into...
    - AD ACL Scanner: 1.3.2: Minor bug fixed: PowerShell 4.0 will report: Select-Object: Parameter cannot be processed because the parameter name p is ambiguous.
    - Json.NET: Json.NET 5.0 Release 7: New feature - Added support for Immutable Collections. New feature - Added WriteData and ReadData settings to DataExtensionAttribute. New feature - Added reference and type name handling support to extension data. New feature - Added default value and required support to constructor deserialization. Change - Extension data is now written when serializing. Fix - Added missing casts to JToken. Fix - Fixed parsing large floating point numbers. Fix - Fixed not parsing some ISO date ...
    - RESX Manager: ResxManager 0.2.1: FIXED: Many critical bugs have been fixed. New features: error logging for improved exception handling, new toolbar, improvements of user interface.
    - Fast YouTube Downloader: YouTube Downloader 2.2.0: YouTube Downloader 2.2.0
    - VidCoder: 1.5.8 Beta: Added hardware acceleration options: Bicubic OpenCL scaling algorithm, QSV decoding/encoding and DXVA decoding. Updated HandBrake core to SVN 5834. Updated VidCoder setup icon. Fixed crash when choosing the mp4v2 container on x86 and opening on x64. Warning: the hardware acceleration features require specific hardware or file types to work correctly: QSV: Need an Intel processor that supports Quick Sync Video encoding, with a monitor hooked up to the Intel HD Graphics output and the lat...
    - ASP.net MVC Awesome - jQuery Ajax Helpers: 3.5.2: version 3.5.2 - fix for setting single value to multivalue controls; datepicker min max date offset fix; html encoding for keys fix; enable Column.ClientFormatFunc to be a function call that will return a function. version 3.5.1 - fixed html attributes rendering; fixed loading animation rendering; css improvements. version 3.5 - autosize for all popups (can be turned off by calling in js awe.autoSize = false); added Parent, Paremeter extensions ...
    - Wsus Package Publisher: Release v1.3.1310.12: Allow the Update Creation Wizard to be set in full screen mode. Fix a bug which prevented WPP from resetting the Remote Sus Client ID. Change the behavior of links in the Update Detail Viewer: left-click to open, right-click to copy to the clipboard.
    - TerrariViewer: TerrariViewer v7 [Terraria Inventory Editor]: This is a complete overhaul but has the same core style. I hope you enjoy it. This version is compatible with 1.2.0.3. Please send issues to my Twitter or https://github.com/TJChap2840
    - WDTVHubGen - Adds Metadata, thumbnails and subtitles to WDTV Live Hubs: WDTVHubGen.v2.1.6.maint: I think this covers all of the issues. New additions: fixed the thumbnail problem for backgrounds; general clean up and error checking. Need to get this put through the wringer and all feedback is welcome.
    - BIDS Helper: BIDS Helper 1.6.4: This BIDS Helper release brings the following new features and fixes: New features: a new Bus Matrix style report option when you run the Printer Friendly Dimension Usage report for an SSAS cube; the Biml engine is now fully in sync with the supported subset of Varigence Mist 3.4, including a large number of language enhancements, bugfixes, and project deployment support. Fixed issues: fixed Biml execution for project connections, fixing a bug with Tabular Translations Editor not a...
    - Free language translator and file converter: Free Language Translator 3.4: fixes for new version look up.
    - MoreTerra (Terraria World Viewer): MoreTerra 1.11.3: New features: new markers added for Plantera's Bulb, Heart Fruits and Gold Cache; markers now correctly display for the gems found in rock debris on the floor. Compatibility: fixed header changes found in Terraria 1.0.3.1.
    - Media Companion: Media Companion MC3.581b: Fix in place for TVDB xml issue. New: Movie - General Preferences, allow saving of ignored 'The' or 'A' to end of movie title, stored in sorttitle field; Movie - new way of cropping posters. Fixed: Movie - rename of folders/filename, caught error message; Movie - fixed bug in Save Cropped image, only saving in Pre-Frodo format if Both model selected; Movie - fixed cropped image didn't take zoomed ratio into effect; Movie - separated folder renaming and file renaming fuctions durin...

    New Projects:

    - CDEasyUI: CDEasyUI
    - Enough XamlConverter: A collection of useful XAML converters for Windows Phone and Windows 8 developers alike.
    - GeReS: Geres is a simple batch job manager for Azure, written in Python for general applicability.
    - Global Excel Automation Powershell Library: The Global Excel Automation PowerShell Library is a series of scripts to help with build deployment, application configuration, database copies and Hyper-V.
    - jean1016jabbrchang: 11
    - katrukTestProject: katruk test project
    - Local to Global Option Set Converter: Automates the task of converting a Local Option Set into a Global Option Set in Microsoft Dynamics CRM 2011.
    - Machine Cards: Machine Cards is a card playing game!
    - Microsoft Translator Portable Wrapper: A portable wrapper for the Microsoft Translator service. Can be used in various app types: desktop apps (.NET Framework 4.5), Windows Phone 8, Windows Store apps.
    - Mod.DisplayTypes: Orchard module for a url that displays content items with a certain display type.
    - Multilingual Translator & Dictionary: The Multilingual Translator & Dictionary can translate and search meanings of words / phrases in multiple languages using Google Translator and Glosbe APIs.
    - nDistribute: This is an attempt to build a library for synchronising data across a network of machines without the use of a predetermined central server.
    - neurogoody: js slicebox
    - NotepadXX: NotepadXX is one of the requirements to complete in Open source. It is an open source text editor software.
    - ODTK: A toolkit for the pen-and-paper RPG "Das Schwarze Auge" (Ulisses Verlag) to simplify some in-game procedures. Combat overview, hero database.
    - Photo Frame and Door Cam: A Windows Service that hosts a simple digital photo frame web page that integrates with the Blue Iris NVR to show camera alerts when motion is detected.
    - Powershell XML Deployment: While working as a Windows Server technology specialist in Sweden in the outsourcing branch, I've discovered that people have a poor sense of automation.
    - PulseMonitor: this is pulse media project
    - RentACarRESTApi: Rent A Car REST Api
    - RubricaSentimentale: test
    - ScrutR - Monitor entities and notify when changes: ScrutR monitors entities of an application and sends notification when the conditions are matched
    - SRMongoDB: ????MongoDB C# ???。?????QueryBuilder.cs??。
    - TP1_Quimica: uiuuu
    - Wake On LAN Gateway: A Client/Server solution for relaying WOL magic packets. Server runs as IIS module or Windows Service. Usage via REST service or installable Windows client.
    - Weather Forecast - Team Pixie - Telerik Academy 2012/2013: Simple weather forecast sharing website.
    - Webapplication1: WebApplication1

    Read the article

  • Google authorship verification issue

    - by Fraser
    I'm trying to get my blog content author verified so my face gets into the Google search results. I managed to achieve this a few weeks back - when testing my content in the Google authorship testing tool it reported that I had been verified, and I could see my mug in the results. All I had to do was wait a couple of weeks before I started popping up in the search results (I think(?)). However, I seem to have thrown a spanner in the works. I set up Google apps for my domain and merged my old Google+ profile into my Google apps account. This seemed to reset my Google+ profile (no biggy, since it was a new profile and only had 1 connection). I re-set up my G+ account and tied it all in to my blog and its content. I am now seeing some very strange behaviour. If you take a look at one of my blog posts through the snippet testing tool: http://www.google.com/webmasters/tools/richsnippets?url=http%3A%2F%2Fblog.fraser-hart.co.uk%2Fjquery-fullscreen-background-slideshow%2F&html= You will see that it is not recognising me as an author. However, when you enter my profile URL (https://plus.google.com/108765138229426105004) into the "Authorship verification by email" input, you will see that it does in fact recognise it as verified. Now, if you try and verify the same page again, it reverts back to unverified. I thought I might have to just wait it out, but this has been over a week now, and previously (before I merged my profile) it happened instantaneously. Has anyone experienced this bizarre behaviour before? What is happening here? More importantly, is there anything I can do to resolve it? (Apologies for the long and boring question). Cheers!

    Read the article

  • Send Multiple InMemory Attachments Using FileUpload Controls

    - by bullpit
    I wanted to give users an ability to send multiple attachments from the web application. I did not want anything fancy, just a few FileUpload controls on the page and then send the email. So I dropped five FileUpload controls on the web page and created a function to send email with multiple attachments. Here’s the code:

        public static void SendMail(string fromAddress, string toAddress, string subject, string body, HttpFileCollection fileCollection)
        {
            // CREATE THE MailMessage OBJECT
            MailMessage mail = new MailMessage();

            // SET ADDRESSES
            mail.From = new MailAddress(fromAddress);
            mail.To.Add(toAddress);

            // SET CONTENT
            mail.Subject = subject;
            mail.Body = body;
            mail.IsBodyHtml = false;

            // ATTACH FILES FROM HttpFileCollection
            for (int i = 0; i < fileCollection.Count; i++)
            {
                HttpPostedFile file = fileCollection[i];
                if (file.ContentLength > 0)
                {
                    Attachment attachment = new Attachment(file.InputStream, Path.GetFileName(file.FileName));
                    mail.Attachments.Add(attachment);
                }
            }

            // SEND MESSAGE
            SmtpClient smtp = new SmtpClient("127.0.0.1");
            smtp.Send(mail);
        }

    And here’s how you call the method:

        protected void uxSendMail_Click(object sender, EventArgs e)
        {
            HttpFileCollection fileCollection = Request.Files;
            string fromAddress = "[email protected]";
            string toAddress = "[email protected]";
            string subject = "Multiple Mail Attachment Test";
            string body = "Mail Attachments Included";
            HelperClass.SendMail(fromAddress, toAddress, subject, body, fileCollection);
        }

    Read the article

  • Should I set the cells in a TiledMap to null when my player hits them?

    - by Vishal Kumar
    I am making a tile-based game using Libgdx. I took the idea from the SuperKoalio platformer demo by Mario Zechner. When I wanted to implement collectables in my game, I simply drew the coins using the Tiled Map Editor. When my player hits one, I set that cell to null. Someone on this site suggested I not do so... never use null. I agreed. But what other way is there? If I use layer.setCell(x,y) to set the cell to any other cell... even a transparent one... my player seems to be stopped by an invisible object/hurdle. This is my code:

        for (Rectangle tile : tiles) {
            if (koalaRect.overlaps(tile)) {
                TiledMapTileLayer layer = (TiledMapTileLayer) map.getLayers().get(1);
                try {
                    type = layer.getCell((int) tile.x, (int) tile.y).getTile().getProperties().get("tileType").toString();
                } catch (Exception e) {
                    System.out.print("Exception in Tiles Property" + e);
                    type = "nonbreakable";
                }
                // Let us destroy this cell
                if ("award".equals(type)) {
                    layer.setCell((int) tile.x, (int) tile.y, null);
                    listener.coin();
                    score += 100;
                    test = "" + layer.getCell(0, 0).getTile().getProperties().get("tileType");
                }
                // DOING THIS GIVES A BAD EFFECT
                if ("killer".equals(type)) {
                    //player.health--;
                    //layer.setCell((int) tile.x, (int) tile.y, layer.getCell(20, 0));
                }
                // we actually reset the player y-position here
                // so it is just below/above the tile we collided with
                // this removes bouncing :)
                if (player.velocity.y > 0) {
                    player.position.y = (tile.y - Player.height);
                }
            }
        }

    Is this the right approach? Or should I create a separate Sprite class called Coin?
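    One way to keep a cell in place without it blocking the player is to filter the collision pass by tile property instead of treating every non-null cell as solid. Below is a minimal sketch, assuming the demo's rectangle pool and a hypothetical "decoration" value for the tileType property (set on the transparent "collected" tile in the Tiled editor):

        private void getTiles(int startX, int startY, int endX, int endY, Array<Rectangle> tiles) {
            TiledMapTileLayer layer = (TiledMapTileLayer) map.getLayers().get(1);
            rectPool.freeAll(tiles);
            tiles.clear();
            for (int y = startY; y <= endY; y++) {
                for (int x = startX; x <= endX; x++) {
                    TiledMapTileLayer.Cell cell = layer.getCell(x, y);
                    if (cell == null) continue;
                    Object type = cell.getTile().getProperties().get("tileType");
                    // decorative tiles never join the collision set,
                    // so the player walks straight through them
                    if ("decoration".equals(type)) continue;
                    Rectangle rect = rectPool.obtain();
                    rect.set(x, y, 1, 1);
                    tiles.add(rect);
                }
            }
        }

    With a filter like this, collecting a coin can swap the tile (cell.setTile(collectedTile)) rather than null the cell, and no invisible wall appears.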

    Read the article

  • Customizing the NUnit GUI for data-driven testing

    - by rwong
    My test project consists of a set of input data files which are fed into a piece of legacy third-party software. Since the input data files for this software are difficult to construct (not something that can be done intentionally), I am not going to add new input data files. Each input data file will be subject to a set of "test functions". Some of the test functions can be invoked independently. Other test functions represent the stages of a sequential operation - if an earlier stage fails, the subsequent stages do not need to be executed. I have experimented with the NUnit parameterized test case (TestCaseAttribute and TestCaseSourceAttribute), passing in the list of data files as test cases. I am generally satisfied with the ability to select the input data for testing. However, I would like to see if it is possible to customize the GUI's tree structure, so that the "test functions" become the children of the "input data". For example:

        File #1
            CheckFileTypeTest
            GetFileTopLevelStructureTest
            CompleteProcessTest
            StageOneTest
            StageTwoTest
            StageThreeTest
        File #2
            CheckFileTypeTest
            GetFileTopLevelStructureTest
            CompleteProcessTest
            StageOneTest
            StageTwoTest
            StageThreeTest

    This will be useful for identifying the stage that failed during the processing of a particular input file. Are there any tips and tricks that will enable the new tree layout? Do I need to customize NUnit to get this layout?

    Read the article

  • Will BIOS boot mode Ubuntu install be able to boot when firmware "Fast Boot" is "Ultra Fast"?

    - by Pro Backup
    I have an AsRock mainboard with UEFI BIOS P1.50 02/14/2014. The firmware "Fast Boot" option is set to "Fast", and Boot Option #1 is set to "AHCI P4: OCZ-VERT...": this is BIOS, not UEFI, boot. This boot disk has an MBR partitioning scheme (# parted -l | grep Partition\ Table:). Therefore Ubuntu 14.04 is installed in BIOS/CSM (Grub-PC) mode. The Ubuntu boot process ends in a text console (no GUI). There is no external graphics card in use. The stock Ubuntu kernel is replaced with the Ubuntu supplied mainline 3.16.0-031600rc6-generic. dmesg outputs lines containing BIOS, like:

        SMBIOS 2.7 present
        Calgary: detecting Calgary via BIOS EBDA area
        Calgary: Unable to locate Rio Grande table in EBDA - bailing!
        [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored
        BIOS EDD facility v0.16 2004-Jun-25, 0 devices found

    The ASRock BIOS itself displays this help text for "Ultra Fast - Fast Boot": Ultra Fast mode is only supported by Windows 8 and the VBIOS must support UEFI GOP if you are using an external graphics card. Please notice that Ultra Fast mode will boot so fast that the only way to enter this UEFI Setup Utility is to Clear CMOS or run the Restart to UEFI utility in Windows. Assumptions: I suspect that after changing the UEFI setting "Fast Boot" to "Ultra Fast", the machine will no longer boot into Ubuntu's console. I expect, when first exchanging "Grub-pc" with "Grub-efi", that the machine will still be able to boot to a grub menu (thus allowing me to change the "Fast Boot" setting back to "Fast" without clearing CMOS). Are these two "Fast Boot" assumptions correct, and/or may I expect Ubuntu 14.04 running mainline kernel 3.16rc6 and Grub-efi to still boot to console after enabling UEFI Ultra Fast Boot?

    Read the article

  • Accessing the JSESSIONID from JSF

    - by Frank Nimphius
    The following code attempts to access and print the user session ID from ADF Faces, using the session cookie that is automatically set by the server and the HttpSession object itself.

        FacesContext fctx = FacesContext.getCurrentInstance();
        ExternalContext ectx = fctx.getExternalContext();
        HttpSession session = (HttpSession) ectx.getSession(false);
        String sessionId = session.getId();
        System.out.println("Session Id = " + sessionId);

        Cookie[] cookies = ((HttpServletRequest) ectx.getRequest()).getCookies();
        // reset session string
        sessionId = null;
        if (cookies != null) {
            for (Cookie brezel : cookies) {
                if (brezel.getName().equalsIgnoreCase("JSESSIONID")) {
                    sessionId = brezel.getValue();
                    break;
                }
            }
        }
        System.out.println("JSESSIONID cookie = " + sessionId);

    Though apparently both approaches do the same thing, they differ in the value they return and the conditions under which they work. The getId method, for example, returns a session value as shown below:

        grLFTNzJhhnQTqVwxHMGl0WDZPGhZFl2m0JS5SyYVmZqvrfghFxy!-1834097692!1322120041091

    Reading the cookie returns a value like this:

        grLFTNzJhhnQTqVwxHMGl0WDZPGhZFl2m0JS5SyYVmZqvrfghFxy!-1834097692

    Though both seem to be identical, the difference is the "!1322120041091" added to the id when reading it directly from the HttpSession object. Depending on the use case the session id is looked up for, the difference may not be important. Another difference, however, is of importance. The cookie reading only works if the session id is added as a cookie to the request, which is configurable for applications in the weblogic-application.xml file. If cookies are disabled, then the server adds the session ID to the request URL (actually it appends it to the end of the URI, so right after the view id reference). In this case, however, no cookie is set, so the lookup returns empty. In both cases, however, the getId variant works.
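    If code ever needs to compare the two forms, the extra creation-timestamp segment can be stripped from the getId value. A minimal sketch, assuming the WebLogic format shown above (id!hash!timestamp):

        String raw = session.getId();
        // drop the trailing "!..." segment so the value matches the JSESSIONID cookie form
        String cookieForm = raw.substring(0, raw.lastIndexOf('!'));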

    Read the article

  • Goal Tracking data seems to be inaccurate?

    - by Khuram Malik
    I set up some Goal Tracking about one week ago. I had multiple goals in one set. The goal itself was the "send" button being pressed on the callback form (I did that by pushing a pageview to Google Analytics every time the send button is pressed). For each goal, I listed the first step as a required step. So for example, the ILR page was step 1 and set as required, and the goal was "/CallbackFormFilled". Looking at the stats a week later I'm getting some very inflated numbers, especially when comparing them to my manually filled Excel spreadsheet, and I'm struggling to understand the cause of this behaviour. I'm unable to attach screenshots unfortunately since my StackExchange account for this site is brand new. My own thoughts: maybe it's because I have set up multiple goals with the same end goal URL, but I thought that was a valid setup since I want to track multiple routes, so to speak(?). I've disabled all other goals for now to confirm this, but I'm waiting for stats to come in as I write this. I also wonder if the contact form I'm using in Wordpress is causing a problem, but I've simply added one javascript line on the send button that pushes a pageview, so I'm not sure if that should cause an issue. Here is a link to setting up analytics on this contact form plugin in Wordpress for reference (see the javascript action hook section): http://ideasilo.wordpress.com/2009/05/31/contact-form-7-1-10/

    Read the article

  • Automatic TRIM vs. manual TRIM

    - by Eike Cochu
    I am currently trying to find out how to trim with my new TP and was wondering about the difference between manual and online trimming. Here is my setup: ThinkPad T430s with SSD Samsung 830, 128GB, and Xubuntu 12.10. Here are some outputs to check if trim will work on my system (got these from here: http://wiki.ubuntuusers.de/SSD/TRIM):

        root@eike-tp:~# sudo hdparm -I /dev/sda | grep -i TRIM
           * Data Set Management TRIM supported (limit 8 blocks)

    First, I tried the online trimming: How to enable TRIM? My fstab with discard inserted:

        UUID=d6c49c17-a4f1-466c-9f7e-896c20db3bba / ext4 discard,noatime,errors=remount-ro 0 1
        # swap was on /dev/sda5 during installation
        UUID=a0322f5f-c6c1-4896-863f-668f0638d8cf none swap sw 0 0
        tmpfs /tmp tmpfs defaults,noatime,mode=1777 0 0

    I tried to test if it works (but I don't get any zeroes when I try it with /dev/sda), but found out that this method is only possible with SSD type 2 and I seem to have type 3. So I don't know if it works or not. The Ubuntu wiki (first link) recommends manual trimming, so I set up a daily cronjob instead of discard:

        #!/bin/sh
        LOG=/var/log/batched_discard.log
        echo "*** $(date -R) ***" >> $LOG
        fstrim -v / >> $LOG

    The wiki article suggests weekly or daily. Now to my questions:

    - How often does the automated trim execute? How often is recommended?
    - Online vs. manual trimming?

    Thank you for your help

    Read the article

  • Mapping XFCE4/xRDP sessions to users

    - by garrilla
    I have Ubuntu 13.10 with Xubuntu Desktop - XFCE4. I'm trying to use XRDP to allow MS Windows users to log in to the machine with their own user. I've been a lot around the houses with this! I've found two half-way solutions, but can't get them to work as I'd like...

    1) In /etc/xrdp/xrdp.ini I set the port to -1:

        [xrdp1]
        name=sesman-Xvnc
        lib=libvnc.so
        username=ask
        password=ask
        ip=127.0.0.1
        port=-1

    Each time any user logs on they get a new session - they can never go back to their original session.

    2) In /etc/xrdp/xrdp.ini I set the port to 5912 (e.g.):

        [xrdp1]
        name=sesman-Xvnc
        lib=libvnc.so
        username=ask
        password=ask
        ip=127.0.0.1
        port=5912

    Each time any user logs on they always log on to the same session, irrespective of their logon details.

    ??) I found a mid-way solution: create a lot of sessions by adding additional options in the xrdp.ini, e.g.

        [xrdp8]
        name=Bob's Logon
        lib=libvnc.so
        username=ask
        password=ask
        ip=127.0.0.1
        port=5913

        [xrdp9]
        name=Jill's Logon
        lib=libvnc.so
        username=ask
        password=ask
        ip=127.0.0.1
        port=5914

    and so on, but the problem with this is that Jill can still log into Bob's remote session ??? Is it possible to do what I'm trying to do? Maybe I have to use different tools?

    Read the article

  • Bluetooth mouse Logitech M555b not recognized on a Macbook Pro 8.2

    - by Pierre
    I just bought a Logitech M555b Bluetooth mouse today. My computer (a Macbook Pro 8,2) has a bluetooth connection, and it works perfectly fine on Mac OSX (I set up a new device, click the connect button on the mouse, and voilà, done in 5 seconds). I cannot get it to work on Ubuntu 12.04. When I follow the same steps as on Mac OSX under Ubuntu 12.04, sometimes I will see the mouse name appearing for 1 second on the screen, and if I'm fast enough to click on "Continue", I will get a message telling me the pairing with my mouse failed. Here are the steps I follow:

    1. Turn off Bluetooth on Ubuntu.
    2. Turn Bluetooth on.
    3. Bluetooth icon > Set up new device...
    4. Turn on the mouse, and press the Connect button.
    5. Press "Continue" on the Bluetooth New Device Setup screen. I see the mouse name for one second, but then it disappears.
    6. Click on PIN options... to select '0000'.
    7. ... nothing, Ubuntu won't see anything.

    In /var/log/syslog, I have the following information:

        May 27 23:26:16 Trane bluetoothd[896]: HCI dev 0 down
        May 27 23:26:16 Trane bluetoothd[896]: Adapter /org/bluez/896/hci0 has been disabled
        May 27 23:26:58 Trane bluetoothd[896]: HCI dev 0 up
        May 27 23:26:58 Trane bluetoothd[896]: Adapter /org/bluez/896/hci0 has been enabled
        May 27 23:26:58 Trane bluetoothd[896]: Inquiry Cancel Failed with status 0x12
        May 27 23:28:26 Trane bluetoothd[896]: Refusing input device connect: No such file or directory (2)
        May 27 23:28:26 Trane bluetoothd[896]: Refusing connection from 00:1F:20:24:15:3D: setup in progress
        May 27 23:28:26 Trane bluetoothd[896]: Discovery session 0x7f3409ab5b70 with :1.84 activated

    I tried for an hour, I restarted my computer, re-paired the mouse on Mac OSX, etc. Sometimes when I restart the computer, I get a dialog box saying that the mouse M555b wants to access some resources or something; if I click "Grant", or even if I tick "always grant access", nothing happens... How can I get the Bluetooth to work properly? Help!

    Read the article

  • (LWJGL) Pixel Unpack Buffer Object is Disabled? (glTexImage2D)

    - by OstlerDev
    I am trying to create a render target for my game so that I can re-render at a different screen size. But I am receiving the following error:

        Exception in thread "main" org.lwjgl.opengl.OpenGLException: Cannot use offsets when Pixel Unpack Buffer Object is disabled

    Here is the source code for my Render method:

        // clear screen
        GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);

        // Start FBO Rendering Code
        // The framebuffer, which regroups 0, 1, or more textures, and 0 or 1 depth buffer.
        int FramebufferName = GL30.glGenFramebuffers();
        GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, FramebufferName);

        // The texture we're going to render to
        int renderedTexture = glGenTextures();

        // "Bind" the newly created texture : all future texture functions will modify this texture
        glBindTexture(GL_TEXTURE_2D, renderedTexture);

        // Give an empty image to OpenGL ( the last "0" )
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 1024, 768, 0, GL_RGB, GL_UNSIGNED_BYTE, 0);

        // Poor filtering. Needed !
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

        // Set "renderedTexture" as our colour attachement #0
        GL32.glFramebufferTexture(GL30.GL_FRAMEBUFFER, GL30.GL_COLOR_ATTACHMENT0, renderedTexture, 0);

        // Set the list of draw buffers.
        IntBuffer drawBuffer = BufferUtils.createIntBuffer(20 * 20);
        GL20.glDrawBuffers(drawBuffer);

        // Always check that our framebuffer is ok
        if (GL30.glCheckFramebufferStatus(GL30.GL_FRAMEBUFFER) != GL30.GL_FRAMEBUFFER_COMPLETE) {
            System.out.println("Framebuffer was not created successfully! Exiting!");
            return;
        }

        // Resets the current viewport
        GL11.glViewport(0, 0, scaleWidth * scale, scaleHeight * scale);
        GL11.glMatrixMode(GL11.GL_MODELVIEW);
        GL11.glLoadIdentity();

        // let subsystem paint
        if (callback != null) {
            callback.frameRendering();
        }

        // update window contents
        Display.update();

    It is crashing on this line:

        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 1024, 768, 0, GL_RGB, GL_UNSIGNED_BYTE, 0);

    I am not really sure why it is crashing and looking around I have not been able to find out why. Any help or insight would be greatly welcome.
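    A likely cause, assuming LWJGL 2: a literal 0 as the last argument resolves to the long overload of glTexImage2D, which LWJGL interprets as an offset into a bound pixel unpack buffer (PBO); since no PBO is bound, it throws the exception above. Passing a null ByteBuffer instead tells OpenGL to allocate the texture without supplying data - a minimal sketch:

        // Give an empty image to OpenGL: the ByteBuffer overload needs no PBO
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 1024, 768, 0, GL_RGB,
                     GL_UNSIGNED_BYTE, (java.nio.ByteBuffer) null);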

    Read the article

  • Recording custom variables to identify individual users with Google Analytics

    - by mrtsherman
    I have been asked by our marketing department to add Google Analytics custom variable tracking to my company's website. As the website uses server side includes, modifications to the tracking tag roll out globally - maintenance is therefore a headache! So, if I add the following code (keeping in mind SSI, so every page has the same code):

        // visitor level tracking, id = 12345
        // Record a unique id for each visitor. When they return also track this id
        _gaq.push(['_setCustomVar', 1, 'id', '12345', 1]);

        // page level tracking
        // If the user signs up for our newsletter we set newsletter to true
        // Each page they subsequently visit should also mark this as true
        _gaq.push(['_setCustomVar', 1, 'newsletter', 'true', 1]);

    I don't use GA and the marketing people don't use custom variables, so we don't actually know how or if this will work. Therefore my questions are:

    - Do I want Page, Session or Visitor level tracking?
    - What happens when the same code is used on every page? Can GA 'overwrite' a setting? For example, if I set newsletter to true on page X and the user then navigates to page Y, will the variable also be marked there?

    Read the article

  • ATI Radeon HD 6870 driver fails to install: default_policy.sh does not support version

    - by Rogue Coder
    I'm running Ubuntu 11.04 Beta, with everything updated completely. I'm using Ubuntu Classic, because Unity fails to run, supposedly because of my video card. The drivers for the Radeon HD 6870 series are apparently lacking, but I found a post stating the newest version has full support for Ubuntu Natty Narwhal. That post is slightly old, so I grabbed 11.3 for Ubuntu x86 off the ATI website. When I run the installation program, I receive the following error:

        > ./ati-driver-installer-11-3-x86.x86_64.run
        Created directory fglrx-install.uREFoO
        Verifying archive integrity... All good.
        Uncompressing ATI Catalyst(TM) Proprietary Driver-8.831.2.....................
        =====================================================================
         ATI Technologies Catalyst(TM) Proprietary Driver Installer/Packager
        =====================================================================
        Error: ./default_policy.sh does not support version
        default:v2:i686:lib::none:2.6.38-8-generic-pae:; make sure that the
        version is being correctly set by --iscurrentdistro
        =====================================================================
         ATI Technologies Catalyst(TM) Proprietary Driver Installer/Packager
        =====================================================================
        Error: ./default_policy.sh does not support version
        default:v2:i686:lib::none:2.6.38-8-generic-pae:; make sure that the
        version is being correctly set by --iscurrentdistro
        Removing temporary directory: fglrx-install.uREFoO

    I would love to get the latest ATI drivers working so that I can try out Unity!

    Read the article

  • Architecture - Tracking lead origin when data is submitted by a server

    - by Kevin
    I'm looking for some assistance in determining the least complex strategy for tracking leads on an affiliate's website. The idea is to make the affiliate's integration with my application as easy as possible. I've run into theoretical barriers, so I'm here to explore other options. Application overview: This is a lead aggregation / distribution platform. We will be focusing on the affiliate portion of this website. Essentially affiliates sign up, enter in marketing campaigns and sell us their conversions. Problem to be solved: We want to track a lead's origin and other events on the affiliate site. We want to know what pages, ads, and forms they viewed before they converted. This can easily be solved with pixel tracking. Very straightforward. Theoretical issues: I thought I would ask affiliates to place the pixel where I could log impressions and set a third party cookie when the pixel is first called. Then I could associate future impressions with this cookie. The problem is that when the visitor converts on the affiliate's site and I receive their information via HTTP POST from the affiliate's server, I wouldn't be able to access the cookie and associate it with the lead record unless the lead lands on my processor via a redirect and is then redirected back to the affiliate's landing page. I don't want to force the affiliates to submit their forms directly to my tracking site, so allowing them to make an HTTP POST from their server side form processor would be ideal. I've considered writing JavaScript to set a first party cookie, but this seems to make things more complicated for the affiliate. I also considered having the affiliate submit the lead's data via a conversion pixel. This seems to be the most ideal scenario so far, as almost all pixels are as easy as copy/paste. The only complication comes from the conversion pixel, which would submit all of the lead information; since the request would come from the visitor's machine, I could access my third party cookie.
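    For the impression-logging piece described above, here is a minimal sketch of a pixel endpoint (a Java servlet; all names are hypothetical, not part of the platform described) that sets the visitor cookie on first sight and logs each impression:

        import java.io.IOException;
        import java.util.UUID;
        import javax.servlet.http.Cookie;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class TrackingPixelServlet extends HttpServlet {
            // 1x1 transparent GIF, decoded once at class load
            private static final byte[] PIXEL = java.util.Base64.getDecoder()
                .decode("R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7");

            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                String visitorId = null;
                Cookie[] cookies = req.getCookies();
                if (cookies != null) {
                    for (Cookie c : cookies) {
                        if ("visitor_id".equals(c.getName())) { visitorId = c.getValue(); break; }
                    }
                }
                if (visitorId == null) {                 // first sight of this visitor
                    visitorId = UUID.randomUUID().toString();
                    Cookie c = new Cookie("visitor_id", visitorId);
                    c.setMaxAge(60 * 60 * 24 * 365);     // persist for a year
                    resp.addCookie(c);
                }
                // the Referer header identifies the affiliate page/ad/form that fired the pixel
                log("impression: " + visitorId + " on " + req.getHeader("Referer"));
                resp.setContentType("image/gif");
                resp.getOutputStream().write(PIXEL);
            }
        }

    The unsolved part - correlating the cookie with a lead POSTed server-to-server - still requires the affiliate to pass the visitor id along, for example by echoing it into a hidden form field so it travels with their POST, since the affiliate's server never sees the third-party cookie.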

    Read the article

  • MATLAB: Best fitness vs mean fitness, initial range

    - by Sa Ta
    Based on the example of Rastrigin's function: at the plot function, if I choose 'best fitness', 'mean fitness' will also be plotted on the same graph. I understand 'best fitness' well, whereby it plots the best function value in each generation versus iteration number. It will reach the value zero after some time. I don't understand the 'mean fitness' in the plotted graph. What do those 'mean fitness' values mean? How does the 'mean fitness' graph help to understand Rastrigin's function? What are the meanings of the terms initial population, initial score and initial range? I wish to have a better understanding of these terms. The default value for initial range is [0,1]. Does it mean that 0 is the lower bound (lb) and 1 is the upper bound (ub)? Do these values interfere with the lb and ub values I set in the constraints? I am trying to better understand lb and ub. If my lb is 0 and ub is 5, does it mean that my final point values will be within 0 and 5? If I know the lb and ub for my problem are 0 and 5, do I just set the initial range as [0,5] at all times? May I assume that this is the best option for the initial range, so I need not try any other values?

    Read the article

  • How should I show shared resources during a Shared Resource game in the Galaxy Editor?

    - by Mag Roader
    One of my favorite ways to play the original StarCraft was in a "Team" game. In this game type, multiple players on the same "team" would share control, resources, supply, and even the same starting location. It was like playing as 1 player, only 2 humans were controlling it. It was a lot of fun. I want to do something very similar in StarCraft 2, but I need to create a custom map in the Galaxy Editor to do it. I found the editor can quite easily emulate this behavior. There is a Trigger action "Set Alliance for Player Group" to "...treat each other as Ally With Shared Vision, Control, And Spending." To use this, I create units for only 1 of the players, and then set all players to be allied with each other in this way. All the other players get no units and no resources. This makes it so 1 player is the actual owner of all the units and everyone else is tagging along with full control. This nearly works! The problem is that if I am not the actual owning player, I can't actually see how many minerals/gas/supply the team has. This makes it pretty difficult to build stuff. What would be the best way to display to the other players how many Minerals/Gas/Supply the team has?

    Read the article

  • I have done everything correctly on my ASP.NET website re: SEO; why aren't Google backlinks showing?

    - by Jason Weber
    I recently implemented many SEO techniques for a company on their asp.net website; in 6 months, we jumped from a PR1 to a PR3. But I'm having issues with Google backlinking. Here are some of the things I've done: Not only did I set up their own Google+ page 6 months ago, I update it pretty much daily with links, pictures, etc., and I blog about it on my own personal Google+ page and post links, etc. ... They have their own Twitter, Facebook, YouTube, and all are updated almost daily. I listed them in as many quality, relevant directories as possible 6 months ago; I've avoided link farms. The site is solid SEO-wise. Key-phrase rich URLs, schema.org & rich snippets. No duplicate content ... www or non-www 301's, trailing slashes, etc. ... all taken care of. Probably a ton of other things, but basically, the site is all set, SEO-wise. Here's what's confounding: When I do a link:www.example.com in Bing/Yahoo, it shows many backlinks. When I do a link:www.example.com in Google, it shows 0 links. Or when I use a site-ranker like Web Site Rank Tool, it shows 0 backlinks from Google. Any suggestions would be appreciated!

    Read the article

  • C++11 Tidbits: access control under SFINAE conditions

    - by Paolo Carlini
    Lately I have been spending quite a bit of time on the SFINAE ("Substitution failure is not an error") features of C++, fixing and tweaking various bits of the GCC implementation. An important missing piece was the implementation of the resolution of DR 1170 which, in a nutshell, mandates that access checking is done as part of the substitution process. Consider:

        class C { typedef int type; };

        template <class T, class = typename T::type> auto f(int) -> char;
        template <class> auto f(...) -> char (&)[2];

        static_assert (sizeof(f<C>(0)) == 2, "Ouch");

    According to the resolution, the static_assert should not fire, and the snippet should compile successfully. The reason being that the first f overload must be removed from the candidate set because C::type is private to C. On the other hand, before the resolution of DR 1170, the expected behavior was for the first overload to remain in the candidate set, win over the second one, and eventually lead to an access control error (*). GCC mainline (would be 4.8) finally implements the DR, thus benefiting the many modern programming techniques heavily exploiting SFINAE, among which certainly the GNU C++ runtime library itself, which relies on it for the internals of <type_traits> and in several other places. Note that the resolution of the DR is active even in C++98 mode, not just in C++11 mode, because it turned out that the traditional behavior, as implemented in GCC, wasn't fully consistent in all the possible circumstances.

    (*) In practice, GCC didn't really implement this, the static_assert triggered instead.

    Read the article

  • Variant Management – Which Approach Fits My Product?

    - by C. Chadwick
    Jürgen Kunz – Director Product Development – ORACLE Deutschland B.V. & Co. KG

    Introduction

    In a difficult economic environment, it is important for companies to understand customer requirements in detail and to address them in their products. Customer specific products, however, usually cause increased costs. Variant management helps to find the best combination of standard components and custom components which balances the customer's product requirements and product costs. Depending on the type of product, different approaches to variant management will be applied. For example, the automotive product "car" or electronic/high-tech products like a "computer", with a pre-defined set of options to be combined in the individual configuration (so-called "Assembled-to-Order" products), require a different approach to products in heavy machinery, which are (at least partially) engineered in a customer specific way (so-called "Engineered-to-Order" products). Starting with the simple Bill of Material (BOM), this article presents three different approaches to variant management, which are provided by Agile PLM.

    Single level BOM and Variant BOM

    The single level BOM is the basic form of the BOM. The product structure is defined using assemblies and single parts. A particular product is thus represented by a fixed product structure. As soon as you have to manage product variants, the single level BOM is no longer sufficient; a variant BOM will be needed. The variant BOM is sometimes referred to as a 150% BOM, since it contains more parts and assemblies than actually needed to assemble the (final) product - just 150% of the parts. You can evolve the variant BOM from the single level BOM by replacing single nodes with a placeholder node. The placeholder in this case represents the possible variants of a part or assembly. Product structure nodes which are part of any product are so-called "Must-Have" parts; "Optional" parts can be omitted in the final product. Additional attributes allow limiting the quantity of parts/assemblies which can be assigned at a certain position in the variant BOM. Figure 1 shows the variant BOM of Agile PLM.

    Figure 1: Variant BOM in Agile PLM

    During the instantiation of the variant BOM, the placeholders get replaced by specific variants of the parts and assemblies. The selection of the desired or appropriate variants is either done step by step by the user or by applying pre-defined configuration rules. As a result of the instantiation, an independent BOM will be created (Figure 2).

    Figure 2: Instantiated BOM in Agile PLM

    This kind of variant BOM can be used for "Assembled-to-Order" type products as well as for "Engineered-to-Order" type products. In the case of "Assembled-to-Order" type products, the instantiation is typically done automatically with pre-defined configuration rules. For "Engineered-to-Order" type products, at least part of the product is selected manually to make use of customized parts/assemblies that have been engineered according to the specific custom requirements.

    Template BOM

    The template BOM is another type of variant BOM, used for "Engineered-to-Order" type products. The engineer works in a flexible environment which allows him to build the most creative solutions. At the same time the engineer shall be guided to re-use existing solutions, and it shall be assured that product variants of the same product family share the same base structure. The template BOM defines the basic structure of products belonging to the same product family. Let's take a gearbox as an example. The customer specific configuration of the gearbox is influenced by several parameters (e.g. rpm range, transmitted torque), which are defined in the customer's requirement document. Figure 3 shows part of a template BOM (yellow) and its relation to the product family hierarchy (blue).

    Figure 3: Template BOM

    Every component of the template BOM has links to the variants that have been engineered so far for the component (depending on the level in the template BOM, they are product variants, assembly variants or single part variants). This library of solutions, the so-called solution space, can be used by the engineers to build new product variants. In the best case, the engineer selects an existing solution variant, such as the gearbox shown in figure 3. When the existing variants do not fulfill the specific requirements, a new variant will be engineered. This new variant must be compliant with the given template BOM. If we look at the gearbox in figure 3, it must consist of a transmission housing, a connecting plate, a set of gears and a planetary transmission - provided that all components are must-have components. The new variant will enhance the solution space and is automatically available for re-use in future variants. The result of the instantiation of the template BOM is a stand-alone BOM which represents the customer specific product variant.

    Modular BOM

    The concept of the modular BOM was invented in the automotive industry. Passenger cars are so-called "Assembled-to-Order" products. The customer first selects the specific equipment of the car (so-called specifications) - for instance engine, audio equipment, rims, color. Based on this information the required parts will be determined and the customer specific car will be assembled. Certain combinations of specifications are not available to the customer, because they are not feasible from a technical perspective (e.g. a convertible with sun roof) or because the combination will not be offered for marketing reasons (e.g. steel rims with a sports line car). The modular BOM (yellow structure in figure 4) is defined in the context of a specific product family (in the sample it is product family "Speedstar"). It is the same modular BOM for the different types of cars of the product family (e.g. sedan, station wagon). The assemblies or single parts of the car (blue nodes in figure 4) are assigned at the leaf level of the modular BOM. The assignment of assemblies and parts to the modular BOM is enriched with a configuration rule (purple elements in figure 4). The configuration rule defines the conditions to use a specific assembly or single part. The configuration rule is valid in the context of a type of car (green elements in figure 4). Color specific parts are assigned to the color independent parts via additional configuration rules (grey elements in figure 4). The configuration rules use Boolean operators to connect the specifications. Additional consistency rules (constraints) may be used to define invalid combinations of specifications (so-called exclusions). Furthermore, consistency rules may be used to add specifications to the set of specifications. For instance, it is important that a car with a diesel engine is always built using the high capacity battery.

    Figure 4: Modular BOM

    The calculation of the car configuration consists of several steps. First the consistency rules (constraints) are applied; as a result, specifications might be added automatically. The second step determines the assemblies and single parts for the complete structure of the modular BOM, by evaluating the configuration rules in the context of the current type of car. The evaluation of the rules for one component in the modular BOM might result in several rules being fulfilled. In this case the most specific rule (typically the longest rule) wins. Thanks to this approach, it is possible to add a specific variant to the modular BOM without the need to change any other configuration rules. As a result, the whole set of configuration rules is easy to maintain. Finally, the color specific assemblies and parts are determined and the configuration is completed.

    Figure 5: Calculated Car Configuration

    The result of the car configuration is shown in figure 5: the list of assemblies and single parts (blue components in figure 5) which are required to build the customer specific car.

    Summary

    There are different approaches to variant management; three have been presented in this article. At the end of the day, it is the type of the product which decides the best approach. For "Assembled-to-Order" type products it is very likely that you can define the configuration rules and calculate the product variant automatically. Products of type "Engineered-to-Order", however, need to be engineered. Nevertheless, in the majority of cases part of the product structure can be generated automatically in a similar way to "Assembled-to-Order" type products. That said, it is important to first analyze the product portfolio in order to define the best approach to variant management.

    Read the article

  • RPi and Java Embedded GPIO: Java code to blink more LEDs

    - by hinkmond
    Now, it's time to blink the other GPIO ports with the other LEDs connected to them. This is easy using Java Embedded, since the Java programming language is powerful and flexible. Embedded developers are not used to this, since the C programming language is more popular but less easy to develop in. We just need to use a dynamic Java String array to map to the pinouts of the GPIO port names from the previous diagram posted. This way we can address each "channel" with an index into that String array.

        static String[] GpioChannels = { "0", "1", "4", "17", "21", "22", "10", "9" };

    With this new dynamic array, we can streamline the main() of this Java program to activate all the ports.

        /**
         * @param args the command line arguments
         */
        public static void main(String[] args) {
            FileWriter[] commandChannels;

            try {
                /*** Init GPIO port for output ***/
                // Open file handles to GPIO port unexport and export controls
                FileWriter unexportFile = new FileWriter("/sys/class/gpio/unexport");
                FileWriter exportFile = new FileWriter("/sys/class/gpio/export");

                for (String gpioChannel : GpioChannels) {
                    System.out.println(gpioChannel);

                    // Reset the port
                    unexportFile.write(gpioChannel);
                    unexportFile.flush();

                    // Set the port for use
                    exportFile.write(gpioChannel);
                    exportFile.flush();

                    // Open file handle to port input/output control
                    FileWriter directionFile = new FileWriter("/sys/class/gpio/gpio" + gpioChannel + "/direction");

                    // Set port for output
                    directionFile.write(GPIO_OUT);
                    directionFile.flush();
                }

    And then simply add array code to where we blink the LED to make it blink all the LEDs on and off at once.

        /*** Send commands to GPIO port ***/
        commandChannels = new FileWriter[GpioChannels.length];
        for (int channum = 0; channum < GpioChannels.length; channum++) {
            ...

    It's easier than falling off a log... or at least easier than C programming. Hinkmond
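    The listing above is truncated in this copy; here is a minimal sketch of what the blink loop would do, assuming the same sysfs interface used earlier (write "1" to each gpioNN/value file to light an LED, "0" to switch it off):

        // Open a command channel to each GPIO port's value file
        for (int channum = 0; channum < GpioChannels.length; channum++) {
            commandChannels[channum] =
                new FileWriter("/sys/class/gpio/gpio" + GpioChannels[channum] + "/value");
        }

        // Blink all LEDs on and off together
        while (true) {
            for (FileWriter channel : commandChannels) {
                channel.write("1");   // LED on
                channel.flush();
            }
            Thread.sleep(500);
            for (FileWriter channel : commandChannels) {
                channel.write("0");   // LED off
                channel.flush();
            }
            Thread.sleep(500);
        }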

    Read the article

  • Collision detection in 3D space

    - by dreta
    I've got to write what can be summed up as a complete 3D game from scratch this semester. Up until now I have only programmed 2D games in my spare time; the transition doesn't seem tough, and the game's simple. The only issue I have is collision detection. The only thing I could find was AABB, bounding spheres or recommendations of various physics engines. I have to program a submarine that's going to be moving freely inside of a cave system; AFAIK I can't use physics libraries, so none of the above solves my problem. Up until now I was using SAT for my collision detection. Are there any similar, great algorithms, but crafted for 3D collision? I'm not talking about octrees or other optimizations, I'm talking about direct collision detection of one set of 3D polygons with another set of 3D polygons. I thought about using SAT twice, projecting the mesh from the top and the side, but then it seems so hard to even divide 3D space into convex shapes. Also that seems like far too much computation even with octrees. How do professionals do it? Could somebody shed some light?
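    SAT does generalize to 3D; the part that is easy to miss is the set of candidate axes. For two convex polyhedra A and B, you must test the face normals of A, the face normals of B, and the cross product of every edge direction of A with every edge direction of B (those cross-product axes are what catch edge-to-edge contacts). A minimal sketch of the per-axis test, using plain double[] vectors as a stand-in for whatever vector type the engine uses:

        // Projects both vertex sets onto the axis and checks the intervals for overlap.
        static boolean overlapsOnAxis(double[][] vertsA, double[][] vertsB, double[] axis) {
            double minA = Double.POSITIVE_INFINITY, maxA = Double.NEGATIVE_INFINITY;
            double minB = Double.POSITIVE_INFINITY, maxB = Double.NEGATIVE_INFINITY;
            for (double[] v : vertsA) {
                double d = v[0] * axis[0] + v[1] * axis[1] + v[2] * axis[2];
                if (d < minA) minA = d;
                if (d > maxA) maxA = d;
            }
            for (double[] v : vertsB) {
                double d = v[0] * axis[0] + v[1] * axis[1] + v[2] * axis[2];
                if (d < minB) minB = d;
                if (d > maxB) maxB = d;
            }
            return maxA >= minB && maxB >= minA; // a gap on any axis means no collision
        }

    If every candidate axis shows overlap, the two convex pieces intersect. Non-convex cave geometry would first be decomposed into convex pieces or tested triangle by triangle; GJK is the other standard algorithm for convex-convex tests.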

    Read the article

  • Most efficient way to handle coordinate maps in Java

    - by glowcoder
    I have a rectangular tile-based layout. It's your typical Cartesian system. I would like to have a single class that handles two lookup styles:

    1. Get me the set of players at position X,Y
    2. Get me the position of player with key K

    My current implementation is this:

        class CoordinateMap<V> {
            Map<Long,Set<V>> coords2value;
            Map<V,Long> value2coords;

            // convert (int x, int y) to long key - this is tested, works for all values -1bil to +1bil
            // My map will NOT require more than 1 bil tiles from the origin :)
            private Long keyFor(int x, int y) {
                int kx = x + 1000000000;
                int ky = y + 1000000000;
                return (long)kx | (long)ky << 32;
            }

            // extract the x and y from the keys
            private int[] coordsFor(long k) {
                int x = (int)(k & 0xFFFFFFFF) - 1000000000;
                int y = (int)((k >>> 32) & 0xFFFFFFFF) - 1000000000;
                return new int[] { x, y };
            }
        }

    From there, I proceed to have other methods that manipulate or access the two maps accordingly. My question is... is there a better way to do this? Sure, I've tested my class and it works fine. And sure, something inside tells me if I want to reference the data by two different keys, I need two different maps. But I can also bet I'm not the first to run into this scenario. Thanks!
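    For reference, a minimal sketch of the accessor methods such a class needs (method names are hypothetical, and the usual java.util imports are assumed), keeping both maps in sync behind one mutator:

        // Move or place a player, updating both indexes together
        void put(V player, int x, int y) {
            remove(player);
            long key = keyFor(x, y);
            Set<V> set = coords2value.get(key);
            if (set == null) {
                set = new HashSet<V>();
                coords2value.put(key, set);
            }
            set.add(player);
            value2coords.put(player, key);
        }

        void remove(V player) {
            Long old = value2coords.remove(player);
            if (old != null) {
                Set<V> set = coords2value.get(old);
                if (set != null) set.remove(player);
            }
        }

        // Lookup style 1: who is at (x, y)?
        Set<V> playersAt(int x, int y) {
            Set<V> set = coords2value.get(keyFor(x, y));
            return set != null ? set : Collections.<V>emptySet();
        }

        // Lookup style 2: where is player K?
        int[] positionOf(V player) {
            Long key = value2coords.get(player);
            return key != null ? coordsFor(key) : null;
        }

    Routing every mutation through one method like this is the real answer to the two-map worry: callers can never update one index and forget the other.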

    Read the article

  • Windows Azure and Server App Fabric &ndash; kinsmen or distant relatives?

    - by kaleidoscope
    Technorati Tags: tinu, windows azure, windows server, app fabric, caching windows azure

    If you are into Windows Azure then it would be rather demeaning to ask if you are aware of Windows Azure App Fabric. Just in case you are not: Windows Azure App Fabric provides a secure connectivity service by means of which developers can build distributed applications as well as services that work across network and organizational boundaries in the cloud. But some of you may have heard of another similar term floating around forums and blog posts - Windows Server App Fabric. The momentary déjà vu that you might have felt upon encountering it is not unheard of in Cloud Computing circles:

    http://social.msdn.microsoft.com/Forums/en/netservices/thread/5ad4bf92-6afb-4ede-b4a8-6c2bcf8f2f3f
    http://forums.virtualizationtimes.com/session-state-management-using-windows-server-app-fabric

    Many have fallen prey to this ambiguous nomenclature, but it's not without a purpose. First announced at PDC 2009, Windows Server AppFabric is a set of application services focused on improving the speed, scale, and management of Web, Composite, and Enterprise applications. Initially codenamed "Dublin", the app fabric (oops... Windows Server App Fabric) provides add-ons like Monitoring, Tracking and Persistence for your hosted Workflows and Services, without the developer having to worry about these functionalities. Along with this, it also provides distributed in-memory caching features from Velocity. In short, it is a healthy equivalent of Windows Azure App Fabric minus the cloud part. So why bring this up while talking about Windows Azure? Well, apart from their similar last names, these powers are soon to be combined, if Microsoft's roadmap is to be believed: "Together, Windows Server AppFabric and Windows Azure platform AppFabric provide a comprehensive set of services that help developers rapidly develop new applications spanning Windows Azure and Windows Server, and which also interoperate with other industry platforms such as Java, Ruby, and PHP." One of the most powerful features of Windows Server App Fabric is its distributed caching mechanism, which, if appropriately leveraged with Windows Azure App Fabric, could very well mean a revolution in session management techniques for the Azure platform. Well Microsoft, we do have our fingers crossed..... Read on: http://blogs.technet.com/windowsserver/archive/2010/03/01/windows-server-appfabric-beta-2-available.aspx

    Read the article

  • Deleting a row from self-referencing table

    - by Jake Rutherford
    Came across this the other day and thought "this would be a great interview question!" I'd created a table with a self-referencing foreign key. The application was calling a stored procedure I'd created to delete a row, which caused, of course, a foreign key exception. You may say "why not just use the cascade delete option?" Good question, easy answer. With a typical foreign key relationship between different tables, that would work. However, even SQL Server cannot do a cascade delete of a row on a table with a self-referencing foreign key. So, what do you do? In my case I re-wrote the stored procedure to take advantage of recursion:

        -- recursively deletes a Foo
        ALTER PROCEDURE [dbo].[usp_DeleteFoo]
             @ID int
            ,@Debug bit = 0
        AS
            SET NOCOUNT ON;
            BEGIN TRANSACTION
            BEGIN TRY
                DECLARE @ChildFoos TABLE
                (
                    ID int
                )

                DECLARE @ChildFooID int

                INSERT INTO @ChildFoos
                SELECT ID FROM Foo WHERE ParentFooID = @ID

                WHILE EXISTS (SELECT ID FROM @ChildFoos)
                BEGIN
                    SELECT TOP 1
                        @ChildFooID = ID
                    FROM
                        @ChildFoos

                    DELETE FROM @ChildFoos WHERE ID = @ChildFooID

                    EXEC usp_DeleteFoo @ChildFooID
                END

                DELETE FROM dbo.[Foo]
                WHERE [ID] = @ID

                IF @Debug = 1 PRINT 'DEBUG:usp_DeleteFoo, deleted - ID: ' + CONVERT(VARCHAR, @ID)
                COMMIT TRANSACTION
            END TRY
            BEGIN CATCH
                ROLLBACK TRANSACTION
                DECLARE @ErrorMessage VARCHAR(4000), @ErrorSeverity INT, @ErrorState INT
                SELECT @ErrorMessage = ERROR_MESSAGE(), @ErrorSeverity = ERROR_SEVERITY(), @ErrorState = ERROR_STATE()
                IF @ErrorState <= 0 SET @ErrorState = 1
                INSERT INTO ErrorLog(ErrorNumber,ErrorSeverity,ErrorState,ErrorProcedure,ErrorLine,ErrorMessage)
                VALUES(ERROR_NUMBER(), @ErrorSeverity, @ErrorState, ERROR_PROCEDURE(), ERROR_LINE(), @ErrorMessage)
                RAISERROR (@ErrorMessage, @ErrorSeverity, @ErrorState)
            END CATCH

    This procedure first determines any rows which have the row we wish to delete as their parent. It then iterates each child row, calling the procedure recursively in order to delete all descendants before eventually deleting the row we wish to delete.

    Read the article
