Search Results

Search found 40479 results on 1620 pages for 'binary files'.


  • Integration Patterns with Azure Service Bus Relay, Part 3: Anonymous partial-trust consumer

    - by Elton Stoneman
    This is the third in the IPASBR series, see also: Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service; Integration Patterns with Azure Service Bus Relay, Part 2: Anonymous full-trust .NET consumer.

    As the patterns get further from the simple .NET full-trust consumer, all that changes is the communication protocol and the authentication mechanism. In Part 3 the scenario is that we still have a secure .NET environment consuming our service, so we can store shared keys securely, but the runtime environment is locked down so we can't use Microsoft.ServiceBus to get the nice WCF relay bindings. To support this we will expose a RESTful endpoint through the Azure Service Bus, and require the consumer to send a security token with each HTTP service request.

    Pattern applicability

    This is a good fit for scenarios where: the runtime environment is secure enough to keep shared secrets; the consumer can execute custom code, including building HTTP requests with custom headers; the consumer cannot use the Azure SDK assemblies; the service may need to know who is consuming it; the service does not need to know who the end-user is.

    Note there isn't actually a .NET requirement here. By exposing the service in a REST endpoint, anything that can talk HTTP can be a consumer. We'll authenticate through ACS which also gives us REST endpoints, so the service is still accessed securely. Our real-world example would be a hosted cloud app, where we have enough room in the app's customisation to keep the shared secret somewhere safe and to hook in some HTTP calls. We will be flowing an identity through to the on-premise service now, but it will be the service identity given to the consuming app - the end user's identity isn't flowed through yet.

    In this post, we'll consume the service from Part 1 in ASP.NET using the WebHttpRelayBinding. The code for Part 3 (+ Part 1) is on GitHub here: IPASBR Part 3.

    Authenticating and authorizing with ACS

    We'll follow the previous examples and add a new service identity for the namespace in ACS, so we can separate permissions for different consumers (see walkthrough in Part 1). I've named the identity partialTrustConsumer. We'll be authenticating against ACS with an explicit HTTP call, so we need a password credential rather than a symmetric key - for a nice secure option, generate a symmetric key, copy it to the clipboard, then change the type to password and paste in the key.

    We then need to do the same as in Part 2: add a rule to map the incoming identity claim to an outgoing authorization claim that allows the identity to send messages to Service Bus:

    Issuer: Access Control Service
    Input claim type: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier
    Input claim value: partialTrustConsumer
    Output claim type: net.windows.servicebus.action
    Output claim value: Send

    As with Part 2, this sets up a service identity which can send messages into Service Bus, but cannot register itself as a listener, or manage the namespace.

    RESTfully exposing the on-premise service through Azure Service Bus Relay

    The Part 3 sample code is ready to go, just put your Azure details into Solution Items\AzureConnectionDetails.xml and "Run Custom Tool" on the .tt files. But to do it yourself is very simple. We already have a WebGet attribute in the service for locally making REST calls, so we are just going to add a new endpoint which uses the WebHttpRelayBinding to relay that service through Azure.
It's as easy as adding this endpoint to Web.config for the service:         <endpoint address="https://sixeyed-ipasbr.servicebus.windows.net/rest"                   binding="webHttpRelayBinding"                    contract="Sixeyed.Ipasbr.Services.IFormatService"                   behaviorConfiguration="SharedSecret">         </endpoint> - and adding the webHttp attribute in your endpoint behavior:           <behavior name="SharedSecret">             <webHttp/>             <transportClientEndpointBehavior credentialType="SharedSecret">               <clientCredentials>                 <sharedSecret issuerName="serviceProvider"                               issuerSecret="gl0xaVmlebKKJUAnpripKhr8YnLf9Neaf6LR53N8uGs="/>               </clientCredentials>             </transportClientEndpointBehavior>           </behavior> Where's my WSDL? The metadata story for REST is a bit less automated. In our local webHttp endpoint we've enabled WCF's built-in help, so if you navigate to: http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc/rest/help - you'll see the uri format for making a GET request to the service. The format is the same over Azure, so this is where you'll be connecting: https://[your-namespace].servicebus.windows.net/rest/reverse?string=abc123 Build the service with the new endpoint, open that in a browser and you'll get an XML version of an HTTP status code - a 401 with an error message stating that you haven’t provided an authorization header: <?xml version="1.0"?><Error><Code>401</Code><Detail>MissingToken: The request contains no authorization header..TrackingId:4cb53408-646b-4163-87b9-bc2b20cdfb75_5,TimeStamp:10/3/2012 8:34:07 PM</Detail></Error> By default, the setup of your Service Bus endpoint as a relying party in ACS expects a Simple Web Token to be presented with each service request, and in the browser we're not passing one, so we can't access the service. Note that this request doesn't get anywhere near your on-premise service, Service Bus only relays requests once they've got the necessary approval from ACS. Why didn't the consumer need to get ACS authorization in Part 2? It did, but it was all done behind the scenes in the NetTcpRelayBinding. By specifying our Shared Secret credentials in the consumer, the service call is preceded by a check on ACS to see that the identity provided is a) valid, and b) allowed access to our Service Bus endpoint. By making manual HTTP requests, we need to take care of that ACS check ourselves now. We do that with a simple WebClient call to the ACS endpoint of our service; passing the shared secret credentials, we will get back an SWT: var values = new System.Collections.Specialized.NameValueCollection(); values.Add("wrap_name", "partialTrustConsumer"); //service identity name values.Add("wrap_password", "suCei7AzdXY9toVH+S47C4TVyXO/UUFzu0zZiSCp64Y="); //service identity password values.Add("wrap_scope", "http://sixeyed-ipasbr.servicebus.windows.net/"); //this is the realm of the RP in ACS var acsClient = new WebClient(); var responseBytes = acsClient.UploadValues("https://sixeyed-ipasbr-sb.accesscontrol.windows.net/WRAPv0.9/", "POST", values); rawToken = System.Text.Encoding.UTF8.GetString(responseBytes); With a little manipulation, we then attach the SWT to subsequent REST calls in the authorization header; the token contains the Send claim returned from ACS, so we will be authorized to send messages into Service Bus. 
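The manipulation the post glosses over is small but worth seeing: the WRAP response is a form-encoded string, so the consumer pulls out the wrap_access_token value, URL-decodes it, and sends it in a WRAP authorization header on every relayed call. The following is only a sketch of that step, continuing from the rawToken variable above - the parsing details and header format are my assumptions rather than code from the post:

        // Sketch: extract the SWT from the raw WRAP response (assumed to be form-encoded name/value pairs)
        // Requires a using directive for System.Linq.
        var token = rawToken.Split('&')
            .First(pair => pair.StartsWith("wrap_access_token=", StringComparison.OrdinalIgnoreCase))
            .Split(new[] { '=' }, 2)[1];
        var swt = System.Web.HttpUtility.UrlDecode(token);

        // Attach the token to each REST call relayed through Service Bus
        var client = new System.Net.WebClient();
        client.Headers[System.Net.HttpRequestHeader.Authorization] =
            string.Format("WRAP access_token=\"{0}\"", swt);
        var reversed = client.DownloadString(
            "https://sixeyed-ipasbr.servicebus.windows.net/rest/reverse?string=abc123");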
Running the sample Navigate to http://localhost:2028/Sixeyed.Ipasbr.WebHttpClient/Default.cshtml, enter a string and hit Go! - your string will be reversed by your on-premise service, routed through Azure: Using shared secret client credentials in this way means ACS is the identity provider for your service, and the claim which allows Send access to Service Bus is consumed by Service Bus. None of the authentication details make it through to your service, so your service is not aware who the consumer is (MSDN calls this "anonymous authentication").

    Read the article

  • jquery ajax error cannot find url outside of debug mode

    - by John Orlandella Jr.
    I inherited some code two weeks ago that is using the jquery.ajax method to connect to a .NET web service. Here is the piece of code give me the trouble... if (MSCTour.AppSettings.OFFLINE !== 'TRUE') { $.ajax({ url: url, data: json, type: "POST", contentType: "application/json", timeout: 10000, dataType: "json", // not "json" we'll parse success: function(res){ if (!callback) { return; } /* // *** Use json library so we can fix up MS AJAX dates */ var result = ""; if (res !== "") { try { result = $.evalJSON(res); } catch (e) { result = {}; bare = true; } } /* // *** Bare message IS result */ if (bare) { callback(result); return; } /* // *** Wrapped message contains top level object node // *** strip it off */ for (var property in result) { callback(result[property]); break; } }, error: function(xhr,status,error){ if (status === 'parsererror') {} else {return error;} }, complete: function(res, status){ if (callback) { if ((status != 'success' && status != 'error') || status === 'parsererror' || (status === 'timeout' && res !== '')) { try { result = $.secureEvalJSON(res); } catch (e) { result = {}; bare = true; } callback(res); } } return; } }); } The url variable at this point equals /testsite/service.svc/GetItems Now here is where my problem lies... When running this site out of debug mode through visual studio I am not having any problem connecting to the database through the web service and seeing all my data, for both viewing and updating. When I go through the normal web server for the same site, on the same page, no data is showing up. When I put a break on the error portion of the code above in firebug this is information I am getting in the image linked below. link text I am getting what appears to be a 404 error, but when I look on the server all of the files are in the right place... coupled with the fact that it works when in debug mode, I think I am slowly going crazy staring at these same lines of code trying to find the needle in the haystack. Any help or just a direction to look in would be greatly appreciated.

    Read the article

  • jQuery Validate() special function

    - by kevin
    Hi there, I'm using the jQuery validate() plugin for some forms and it goes great. The only thing is that I have an input field that requires a special validation process. Here is how it goes: the jQuery validate plugin is called in the domready for all the required fields. Here is an example for an input: <li> <label for="nome">Nome completo*</label> <input name="nome" type="text" id="nome" class="required"/> </li>

    And here is how I call my special function: <li> <span id="sprytextfield1"> <label for="cpf">CPF* (xxxxxxxxxxx)</label> <input name="cpf" type="text" id="cpf" maxlength="15" class="required" /> <span class="textfieldInvalidFormatMsg">CPF Inv&aacute;lido.</span> </span> </li>

    And at the bottom of the file I call the Spry function: <script type="text/javascript"> <!-- var sprytextfield1 = new Spry.Widget.ValidationTextField("sprytextfield1","cpf"); //--> </script>

    Of course I call the Spry css and js files in the head section as well as my special-validate.js. When I just use the jQuery validate() plugin and click on the send button, the page goes automatically back to the first mistaken input field and shows the error type (not a number, not a valid email etc.). But with this new function, this "going-back-to-the-first-mistake" feature doesn't work, of course, because the validate() function sees it all good.

    I already added a rule for another form (about pictures upload) and it goes like this: $("#commentForm").validate({ rules: { foto34: { required: true, accept: "jpg|png|gif" } } });

    Now my question is, how can I add the special validation function as a rule of the whole validation process? Here is the page to understand it better: link text and the special field is the first one: CPF. Hope I was clear explaining my problem. Thanks in advance. kevin

    Read the article

  • Creating a music catalog in C# and extracting first 30 seconds as soon as the first words are sung

    - by Rad
    I already read a question: Separation of singing voice from music. I don't need this complex audio processing. I only need some detection mechanism that would detect that there is some voice/vocal playing while the music is playing (or not playing). I need to extract the first 30 seconds from when a vocalist starts singing along with the full band music. See question 2 below.

    I want to create a music catalog using ASP.NET MVC 2 and Silverlight clients and the C#.NET 4.0 programming language; that would be the storefront. On the backend I would also like to create a desktop WPF/Windows application to create the music catalog from already existing music files, most of which have metadata in them (ID3v1, ID3v2.3, ID3v2.4, iTunes MP4, WMA, Vorbis Comments, APE Tags, etc.). I would possibly like to create a web service that would allow catalog contributors to upload a zipped album and trigger metadata extraction of music data and extraction of music segments as described below. I would be happy if I achieve no. 1 below.

    Let's say I have thousands of songs in mp3 (or other formats) grouped in subfolders using some classification (Genre, Artists, Albums, Composers or other groupings). I want to create tables in the DB that would organize songs so they can be searched based on different criteria (year, length, the above classification, or by song title, description etc.) like what the iTunes store allows its customers. I want to extract metadata from various formats (I will try to get songs in mp3 format, but there may be other popular formats) and allow the music catalog manager to add missing data from either the desktop or web applications. He or other contributors can upload zipped music via an HTML or Silverlight upload or WPF. Can anybody suggest open source libraries, articles, or code snippets that can do that in an automatic way using .NET and possibly a SQL Server DB?

    My main questions are these. This is an audio processing challenge. I want to extract 2 segments of music (questions 1 and 2): 1. How to extract a music segment: 1-2 seconds before a vocal starts singing and up to 30 seconds from that point in time, and 2. Much more challenging is to find repeating segments (one would usually recognize the names of songs by these refrains). How would I go about creating a list of songs that go great together like what Genius from iTunes does? Are there any characteristics of music that can be used to match songs? The goal is for people to quickly scan and recognize songs, i.e. associate melody and words with a title/album so they can make intelligent decisions like buying a song or creating similar mood songs. Thanks, Rad
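    For the cataloguing half of this (not the vocal-detection half), one open source library that is often suggested for .NET metadata extraction is TagLib# (taglib-sharp), which reads ID3v1/v2, MP4, WMA, APE and Vorbis tags through a single API. A hedged sketch of what scanning a folder into catalog records might look like - the property names are from memory, so verify them against the library before relying on this:

        // Assumes a reference to taglib-sharp; walks a folder tree and prints basic tag data per track
        using System;
        using System.IO;

        class CatalogScanSketch
        {
            static void Main()
            {
                foreach (var path in Directory.EnumerateFiles(@"C:\Music", "*.mp3", SearchOption.AllDirectories))
                {
                    var file = TagLib.File.Create(path);
                    Console.WriteLine("{0} - {1} ({2}) [{3}]",
                        file.Tag.FirstPerformer,    // artist
                        file.Tag.Title,
                        file.Tag.Album,
                        file.Properties.Duration);  // track length, useful for the search criteria mentioned above
                }
            }
        }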

    Read the article

  • Creating a Document Library with Content Type in code

    - by David Jacobus
    Originally posted on: http://geekswithblogs.net/djacobus/archive/2013/10/15/154360.aspxIn the past, I have shown how to create a list content type and add the content type to a list in code.  As a Developer, many of the artifacts which we create are widgets which have a List or Document Library as the back end.   We need to be able to create our applications (Web Part, etc.) without having the user involved except to enter the list item data.  Today, I will show you how to do the same with a document library.    A summary of what we will do is as follows:   1.   Create an Empty SharePoint Project in Visual Studio 2.   Add a Code Folder in the solution and Drag and Drop Utilities and Extensions Libraries to the solution 3.   Create a new Feature and add and event receiver  all the code will be in the event receiver 4.   Add the fields which will extend the built-in Document content type 5.   If the Content Type does not exist, Create it 6.   If the Document Library does not exist, Create it with the new Content Type inherited from the Document Content Type 7.   Delete the Document Content Type from the Library (as we have a new one which inherited from it) 8.   Add the fields which we want to be visible from the fields added to the new Content Type   Here we go:   Create an Empty SharePoint Project in Visual Studio      Add a Code Folder in the solution and Drag and Drop Utilities and Extensions Libraries to the solution       The Utilities and Extensions Library will be part of this project which I will provide a download link at the end of this post.  Drag and drop them into your project.  If Dragged and Dropped from windows explorer you will need to show all files and then include them in your project.  Change the Namespace to agree with your project.   Create a new Feature and add and event receiver  all the code will be in the event receiver.  Here We added a new Feature called “CreateDocLib”  and then right click to add an Event Receiver All of our code will be in this Event Receiver.  For this Demo I will only be using the Feature Activated Event.      From this point on we will be looking at code!    We are adding two constants for use columGroup (How we want SharePoint to Group them, usually Company Name) and ctName(ContentType Name)  using System; using System.Runtime.InteropServices; using System.Security.Permissions; using Microsoft.SharePoint; namespace CreateDocLib.Features.CreateDocLib { /// <summary> /// This class handles events raised during feature activation, deactivation, installation, uninstallation, and upgrade. /// </summary> /// <remarks> /// The GUID attached to this class may be used during packaging and should not be modified. /// </remarks> [Guid("56e6897c-97c4-41ac-bc5b-5cd2c04f2dd1")] public class CreateDocLibEventReceiver : SPFeatureReceiver { const string columnGroup = "DJ"; const string ctName = "DJDocLib"; } }     Here we are creating the Feature Activated event.   Adding the new fields (Site Columns) ,  Testing if the Content Type Exists, if not adding it.  Testing if the document Library exists, if not adding it.   
#region DocLib public override void FeatureActivated(SPFeatureReceiverProperties properties) { using (SPWeb spWeb = properties.GetWeb() as SPWeb) { //add the fields addFields(spWeb); //add content type SPContentType testCT = spWeb.ContentTypes[ctName]; // we will not create the content type if it exists if (testCT == null) { //the content type does not exist add it addContentType(spWeb, ctName); } if ((spWeb.Lists.TryGetList("MyDocuments") == null)) { //create the list if it dosen't to exist CreateDocLib(spWeb); } } } #endregion The addFields method uses the utilities library to add site columns to the site. We can add as many fields within this method as we like. Here we are adding one for demonstration purposes. Icon as a Url type.  public void addFields(SPWeb spWeb) { Utilities.addField(spWeb, "Icon", SPFieldType.URL, false, columnGroup); }The addContentType method add the new Content Type to the site Content Types. We have already checked to see that it does not exist. In addition, here is where we add the linkages from our site columns previously created to our new Content Type   private static void addContentType(SPWeb spWeb, string name) { SPContentType myContentType = new SPContentType(spWeb.ContentTypes["Document"], spWeb.ContentTypes, name) { Group = columnGroup }; spWeb.ContentTypes.Add(myContentType); addContentTypeLinkages(spWeb, myContentType); myContentType.Update(); } Here we are adding just one linkage as we only have one additional field in our Content Type public static void addContentTypeLinkages(SPWeb spWeb, SPContentType ct) { Utilities.addContentTypeLink(spWeb, "Icon", ct); } Next we add the logic to create our new Document Library, which we have already checked to see if it exists.  We create the document library and turn on content types.  Add the new content type and then delete the old “Document” content types.   private void CreateDocLib(SPWeb web) { using (var site = new SPSite(web.Url)) { var web1 = site.RootWeb; var listId = web1.Lists.Add("MyDocuments", string.Empty, SPListTemplateType.DocumentLibrary); var lib = web1.Lists[listId] as SPDocumentLibrary; lib.ContentTypesEnabled = true; var docType = web.ContentTypes[ctName]; lib.ContentTypes.Add(docType); lib.ContentTypes.Delete(lib.ContentTypes["Document"].Id); lib.Update(); AddLibrarySettings(web1, lib); } }  Finally, we set some document library settings on our new document library with the AddLibrarySettings method. We then ensure that the new site column is visible when viewed in the browser.  private void AddLibrarySettings(SPWeb web, SPDocumentLibrary lib) { lib.OnQuickLaunch = true; lib.ForceCheckout = true; lib.EnableVersioning = true; lib.MajorVersionLimit = 5; lib.EnableMinorVersions = true; lib.MajorWithMinorVersionsLimit = 5; lib.Update(); var view = lib.DefaultView; view.ViewFields.Add("Icon"); view.Update(); } Okay, what's cool here: In a few lines of code, we have created site columns, A content Type, a document library. As a developer, I use this functionality all the time. For instance, I could now just add a web part to this same solutionwhich uses this document Library. I love SharePoint! Here is the complete solution: Create Document Library Code
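The post calls into addField and addContentTypeLink from the downloadable Utilities library without showing them. As a rough idea of what such helpers usually do - this is a hypothetical reconstruction, not the author's code - they create the site column if it is missing and then link it into the content type:

    // Hypothetical reconstruction; the real implementations ship in the linked Utilities library
    public static void addField(SPWeb web, string name, SPFieldType type, bool required, string group)
    {
        if (!web.Fields.ContainsField(name))
        {
            web.Fields.Add(name, type, required);
            SPField field = web.Fields[name];
            field.Group = group;      // e.g. the columnGroup constant, "DJ"
            field.Update();
        }
    }

    public static void addContentTypeLink(SPWeb web, string fieldName, SPContentType ct)
    {
        SPField field = web.Fields[fieldName];
        if (ct.FieldLinks[field.Id] == null)
        {
            ct.FieldLinks.Add(new SPFieldLink(field));   // surfaces the column on the content type
        }
    }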

    Read the article

  • Including hibernate jar dependencies in ant build

    - by Patrick
    Hi, I'm trying to compile a runnable jar-file for a project that makes use of hibernate. I'm trying to construct an ant build.xml file to streamline my build process, but I'm having troubles with the inclusion of the hibernate3.jar inside the final jar-file. If I run the ant script I manage to include all my library jars, and they are put in the final jar-file's root. When I run the jar-file I get a java.lang.NoClassDefFoundError: org/hibernate/Session error. If I make use of the built-in export to jar in Eclipse, it works only if I choose "extract required libraries into jar". But that bloats the jar, and includes too much of my project (i.e. unit tests). Below is my generated manifest: Manifest-Version: 1.0 Main-Class: main.ServerImpl Class-Path: ./ antlr-2.7.6.jar commons-collections-3.1.jar dom4j-1.6.1.jar hibernate3.jar javassist-3.9.0.GA.jar jta-1.1.jar slf4j-api-1.5.11.jar slf4j-simple-1.5.11.jar mysql-connector-java-5.1.12-bin.jar rmiio-2.0.2.jar commons-logging-1.1.1.jar And the part of the build.xml looks like this: <target name="dist" depends="compile" description="Generates the Distribution Jar(s)"> <mkdir dir="${dist.dir}" /> <jar destfile="${dist.dir}/${dist.file.name}.jar" basedir="${build.prod.dir}" filesetmanifest="mergewithoutmain"> <manifest> <attribute name="Main-Class" value="${main.class}" /> <attribute name="Class-Path" value="./ ${manifest.classpath} " /> <attribute name="Implementation-Title" value="${app.name}" /> <attribute name="Implementation-Version" value="${app.version}" /> <attribute name="Implementation-Vendor" value="${app.vendor}" /> </manifest> <zipfileset refid="hibernatefiles" /> <zipfileset refid="slf4jfiles" /> <zipfileset refid="mysqlfiles" /> <zipfileset refid="commonsloggingfiles" /> <zipfileset refid="rmiiofiles" /> </jar> </target> The refids' for the zipfilesets point to the directories in a library directory lib in the root of the project. The manifest.classpath-variable takes the classpath of all those library jar-files, and flattens them with pathconvert and mapper. I've also tried to set the manifest classpath to ".", "./" and only the library jar, but to no difference at all. I'm hoping there's a simple remedy to my problems...

    Read the article

  • Query an XmlDocument without getting a 'Namespace prefix is not defined' problem

    - by Dan Revell
    I've got an Xml document that both defines and references some namespaces. I load it into an XmlDocument object and to the best of my knowledge I create an XmlNamespaceManager object with which to run XPath queries against it. Problem is I'm getting XPath exceptions that the namespace "my" is not defined. How do I get the namespace manager to see that the namespaces I am referencing are already defined? Or rather, how do I get the namespace definitions from the document to the namespace manager?

    Furthermore it strikes me as strange that you have to provide a namespace manager to the document, which you create from the document's NameTable in the first place. Even if you need to hardcode manual namespaces, why can't you add them directly to the document? Why do you always have to pass this namespace manager with every single query? Why can't XmlDocument just know?

    The Code: XmlDocument xmlDoc = new XmlDocument(); xmlDoc.Load(programFiles + @"Common Files\Microsoft Shared\web server extensions\12\TEMPLATE\FEATURES\HfscBookingWorkflow\template.xml"); XmlNamespaceManager ns = new XmlNamespaceManager(xmlDoc.NameTable); XmlNode referenceNode = xmlDoc.SelectSingleNode("/my:myFields/my:ReferenceNumber", ns); referenceNode.InnerXml = this.bookingData.ReferenceNumber; XmlNode titleNode = xmlDoc.SelectSingleNode("/my:myFields/my:Title", ns); titleNode.InnerXml = this.bookingData.FamilyName; ...

    The Xml: <?xml version="1.0" encoding="UTF-8" ?> <?mso-infoPathSolution name="urn:schemas-microsoft-com:office:infopath:Inspection:-myXSD-2010-01-15T18-21-55" solutionVersion="1.0.0.104" productVersion="12.0.0" PIVersion="1.0.0.0" ?> <?mso-application progid="InfoPath.Document" versionProgid="InfoPath.Document.2"?> <my:myFields xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xhtml="http://www.w3.org/1999/xhtml" xmlns:my="http://schemas.microsoft.com/office/infopath/2003/myXSD/2010-01-15T18:21:55" xmlns:xd="http://schemas.microsoft.com/office/infopath/2003"> <my:DateRequested xsi:nil="true" /> <my:DateVisited xsi:nil="true" /> <my:ReferenceNumber /> <my:FireCall>false</my:FireCall> ...
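    For what it's worth, the usual explanation (my addition, not part of the question) is that XmlNamespaceManager starts out empty even when it is constructed from the document's NameTable; the prefix has to be registered explicitly against the URI the document declares before the XPath queries will resolve:

        XmlNamespaceManager ns = new XmlNamespaceManager(xmlDoc.NameTable);
        // Register the prefix against the xmlns:my URI declared on <my:myFields>
        ns.AddNamespace("my", "http://schemas.microsoft.com/office/infopath/2003/myXSD/2010-01-15T18:21:55");
        XmlNode referenceNode = xmlDoc.SelectSingleNode("/my:myFields/my:ReferenceNumber", ns);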

    Read the article

  • ImageMagick: convert png fail via PHP and works via bash shell

    - by wedix
    I've got a very weird bug which I've yet to find a solution. UPDATE see solution below What I am trying to do is convert a full size picture into a 160x120 thumbnail. It works great with jpg and jpeg files of any size, but not with png. ImageMagick command: /opt/local/bin/convert '/WEBSERVER/images/img_0003-192-10.png' -thumbnail x320 -resize '320x<' -resize 50% -gravity center -crop 160x120+0+0 +repage -quality 91 '/WEBSERVER/thumbs/small_img_0003-192-10.png' PHP function (shortened) ... $cmd = "/opt/local/bin/convert '/WEBSERVER/images/img_0003-192-10.png' -thumbnail x320 -resize '320x<' -resize 50% -gravity center -crop 160x120+0+0 +repage -quality 91 '/WEBSERVER/thumbs/small_img_0003-192-10.png'"; exec($cmd, $output, $retval); $errors += $retval; if ($errors > 0) { die(print_r($output)); } When this function runs $retval equal 1 which means the convert command failed (thumbnail isn't created). This is where it gets interesting, if I run the exact same command in my shell, it works. wedbook:~ wedix$ /opt/local/bin/convert '/WEBSERVER/images/img_0003-192-10.png' -thumbnail x320 -resize '320x<' -resize 50% -gravity center -crop 160x120+0+0 +repage -quality 91 '/WEBSERVER/thumbs/small_img_0003-192-10.png' wedbook:~ wedix$ I've tried using different PHP function such as system, passthru but it didn't work. I thought maybe someone here knew the solution. I'm using MAMP 1.7.2 Apache/2.0.59 PHP/5.2.6 Thanks! UPDATE I updated the following dependencies libpng from 1.2.35 to 1.2.37 libiconv from 1.12_2 to 1.13_0 ImageMagick 6.5.2-4_1 to 6.5.2-9_0 However, it did not fix my problem. 2nd UPDATE I finally found something that might help, when the function runs this is what gets printed in the Apache logs: dyld: Library not loaded: /opt/local/lib/libiconv.2.dylib Referenced from: /opt/local/bin/convert Reason: Incompatible library version: convert requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0 3rd UPDATE libiconv.2.dylib is version 8.0.0... bash-3.2$ otool -L /opt/local/lib/libiconv.2.dylib /opt/local/lib/libiconv.2.dylib: /opt/local/lib/libiconv.2.dylib (compatibility version 8.0.0, current version 8.0.0) /usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0) /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 111.1.4) 4th UPDATE Problem was related to MAMP, see solution below

    Read the article

  • JBoss logging issue

    - by balaji
    I'm Working as deployer and server administrator. We use Jboss 4.0x AS to deploy our applications. The issue I'm facing is, Whenever we redeploy/restart the server, server.log is getting created but after sometime the logging goes off. Yes it is not at all updating the server.log file. Due to this, we could not trace the other critical issues we have. Actually we have two separate nodes and we do deploy/restarting the server separately on two nodes. We are facing the issue in both of our test and production environment. I could not trace out where exactly the issue is. Could you please help me in resolving the issue? If we have any other issues, we can check the log files. If log itself is not getting updated/logged, how can we move further in analyzing the issues without the recent/updated logs? Below are the logs found in the stdout.log: 18:55:50,303 INFO [Server] Core system initialized 18:55:52,296 INFO [WebService] Using RMI server codebase: http://kl121tez.is.klmcorp.net:8083/ 18:55:52,313 INFO [Log4jService$URLWatchTimerTask] Configuring from URL: resource:log4j.xml 18:55:52,860 ERROR [STDERR] LOG0026E The Log Manager cannot create the object AmasRBPFTraceLogger without a class name. 18:55:52,860 ERROR [STDERR] LOG0026E The Log Manager cannot create the object AmasRBPFMessageLogger without a class name. 18:55:54,273 ERROR [STDERR] LOG0026E The Log Manager cannot create the object AmasCacheTraceLogger without a class name. 18:55:54,274 ERROR [STDERR] LOG0026E The Log Manager cannot create the object AmasCacheMessageLogger without a class name. 18:55:54,334 ERROR [STDERR] LOG0026E The Log Manager cannot create the object JACCTraceLogger without a class name. 18:55:54,334 ERROR [STDERR] LOG0026E The Log Manager cannot create the object JACCMessageLogger without a class name. 18:55:56,059 INFO [ServiceEndpointManager] WebServices: jbossws-1.0.3.SP1 (date=200609291417) 18:55:56,635 INFO [Embedded] Catalina naming disabled 18:55:56,671 INFO [ClusterRuleSetFactory] Unable to find a cluster rule set in the classpath. Will load the default rule set. 18:55:56,672 INFO [ClusterRuleSetFactory] Unable to find a cluster rule set in the classpath. Will load the default rule set. 18:55:56,843 INFO [Http11BaseProtocol] Initializing Coyote HTTP/1.1 on http-0.0.0.0-8180 18:55:56,844 INFO [Catalina] Initialization processed in 172 ms 18:55:56,844 INFO [StandardService] Starting service jboss.web

    Read the article

  • Log4net duplicate logging entries

    - by user210713
    I recently switched out log4net logging from using config files to being set up programmatically. This has resulted in the nhiberate entries getting repeated 2 or sometimes 3 times. Here's the code. It uses a string which looks something like this "logger1|debug,logger2|info" private void SetupLog4netLoggers() { IAppender appender = GetAppender(); SetupRootLogger(appender); foreach (string logger in Loggers) { CommaStringList parts = new CommaStringList(logger, '|'); if (parts.Count != 2) continue; AddLogger(parts[0], parts[1], appender); } log.Debug("Log4net has been setup"); } private IAppender GetAppender() { RollingFileAppender appender = new RollingFileAppender(); appender.File = LogFile; appender.AppendToFile = true; appender.MaximumFileSize = MaximumFileSize; appender.MaxSizeRollBackups = MaximumBackups; PatternLayout layout = new PatternLayout(PATTERN); layout.ActivateOptions(); appender.Layout = layout; appender.ActivateOptions(); return appender; } private void SetupRootLogger(IAppender appender) { Hierarchy hierarchy = (Hierarchy)LogManager.GetRepository(); hierarchy.Root.RemoveAllAppenders(); hierarchy.Root.AddAppender(appender); hierarchy.Root.Level = GetLevel(RootLevel); hierarchy.Configured = true; log.Debug("Root logger setup, level[" + RootLevel + "]"); } private void AddLogger(string name, string level, IAppender appender) { Logger logger = LogManager.GetRepository().GetLogger(name)as Logger; if (logger == null) return; logger.Level = GetLevel(level); logger.Additivity = false; logger.RemoveAllAppenders(); logger.AddAppender(appender); log.Debug("logger[" + name + "] added, level[" + level + "]"); } And here's an example of what we see in our logs... 2010-05-06 15:50:39,781 [1] DEBUG NHibernate.Impl.SessionImpl - running ISession.Dispose() 2010-05-06 15:50:39,781 [1] DEBUG NHibernate.Impl.SessionImpl - closing session 2010-05-06 15:50:39,781 [1] DEBUG NHibernate.AdoNet.AbstractBatcher - running BatcherImpl.Dispose(true) 2010-05-06 15:50:39,796 [1] DEBUG NHibernate.Impl.SessionImpl - running ISession.Dispose() 2010-05-06 15:50:39,796 [1] DEBUG NHibernate.Impl.SessionImpl - closing session 2010-05-06 15:50:39,796 [1] DEBUG NHibernate.AdoNet.AbstractBatcher - running BatcherImpl.Dispose(true) 2010-05-06 15:50:39,796 [1] DEBUG NHibernate.Impl.SessionImpl - running ISession.Dispose() 2010-05-06 15:50:39,796 [1] DEBUG NHibernate.Impl.SessionImpl - closing session 2010-05-06 15:50:39,796 [1] DEBUG NHibernate.AdoNet.AbstractBatcher - running BatcherImpl.Dispose(true) Any hints welcome.
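    The question doesn't include a diagnosis, but one thing worth ruling out (purely a guess from the code shown) is the setup method running more than once per process, which would stack a fresh RollingFileAppender on top of the ones added earlier. A small defensive sketch:

        // Sketch of a guard, not a confirmed fix: clear any earlier configuration before rebuilding it
        private void SetupLog4netLoggers()
        {
            Hierarchy hierarchy = (Hierarchy)LogManager.GetRepository();
            hierarchy.ResetConfiguration();   // drops appenders and levels left over from a previous call

            IAppender appender = GetAppender();
            SetupRootLogger(appender);
            foreach (string logger in Loggers)
            {
                CommaStringList parts = new CommaStringList(logger, '|');
                if (parts.Count != 2) continue;
                AddLogger(parts[0], parts[1], appender);
            }
            log.Debug("Log4net has been setup");
        }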

    Read the article

  • Adding a generic image field onto a ModelForm in django

    - by Prairiedogg
    I have two models, Room and Image. Image is a generic model that can tack onto any other model. I want to give users a form to upload an image when they post information about a room. I've written code that works, but I'm afraid I've done it the hard way, and specifically in a way that violates DRY. Was hoping someone who's a little more familiar with django forms could point out where I've gone wrong. Update: I've tried to clarify why I chose this design in comments to the current answers. To summarize: I didn't simply put an ImageField on the Room model because I wanted more than one image associated with the Room model. I chose a generic Image model because I wanted to add images to several different models. The alternatives I considered were were multiple foreign keys on a single Image class, which seemed messy, or multiple Image classes, which I thought would clutter my schema. I didn't make this clear in my first post, so sorry about that. Seeing as none of the answers so far has addressed how to make this a little more DRY I did come up with my own solution which was to add the upload path as a class attribute on the image model and reference that every time it's needed. # Models class Image(models.Model): content_type = models.ForeignKey(ContentType) object_id = models.PositiveIntegerField() content_object = generic.GenericForeignKey('content_type', 'object_id') image = models.ImageField(_('Image'), height_field='', width_field='', upload_to='uploads/images', max_length=200) class Room(models.Model): name = models.CharField(max_length=50) image_set = generic.GenericRelation('Image') # The form class AddRoomForm(forms.ModelForm): image_1 = forms.ImageField() class Meta: model = Room # The view def handle_uploaded_file(f): # DRY violation, I've already specified the upload path in the image model upload_suffix = join('uploads/images', f.name) upload_path = join(settings.MEDIA_ROOT, upload_suffix) destination = open(upload_path, 'wb+') for chunk in f.chunks(): destination.write(chunk) destination.close() return upload_suffix def add_room(request, apartment_id, form_class=AddRoomForm, template='apartments/add_room.html'): apartment = Apartment.objects.get(id=apartment_id) if request.method == 'POST': form = form_class(request.POST, request.FILES) if form.is_valid(): room = form.save() image_1 = form.cleaned_data['image_1'] # Instead of writing a special function to handle the image, # shouldn't I just be able to pass it straight into Image.objects.create # ...but it doesn't seem to work for some reason, wrong syntax perhaps? upload_path = handle_uploaded_file(image_1) image = Image.objects.create(content_object=room, image=upload_path) return HttpResponseRedirect(room.get_absolute_url()) else: form = form_class() context = {'form': form, } return direct_to_template(request, template, extra_context=context)

    Read the article

  • Structs, strtok, segmentation fault

    - by FILIaS
    I'm trying to make a program with structs and files. The following is just a part of my code (it's not all). What I'm trying to do is ask the user to write his command, e.g. "delete John" or "enter John James 5000 ipad purchase". The problem is that I want to split the command in order to save its 'args' for a struct element. That's why I used strtok. BUT I'm facing another problem in how to 'put' these into the struct.

    #include <stdio.h> #include <stdlib.h> #include <string.h> #define MAX 100 char command[1500]; struct catalogue { char short_name[50]; char surname[50]; signed int amount; char description[1000]; }*catalog[MAX]; int main ( int argc, char *argv[] ) { int i,n; char choice[3]; printf(">sort1: Print savings sorted by surname\n"); printf(">sort2: Print savings sorted by amount\n"); printf(">search+name:Print savings of each name searched\n"); printf(">delete+full_name+amount: Erase saving\n"); printf(">enter+full_name+amount+description: Enter saving \n"); printf(">quit: Update + EXIT program.\n"); printf("Choose your selection:\n>"); gets(command); //it saves the whole command /*in choice only the first 2 letters are saved (needed for menu choice again)*/ strncpy(choice,command,2); choice[2]='\0'; char** args = (char**)malloc(strlen(command)*sizeof(char*)); memset(args, 0, sizeof(char*)*strlen(command)); char* curToken = strtok(command, " \t"); for (n = 0; curToken != NULL; ++n) { args[n] = strdup(curToken); curToken = strtok(NULL, " \t"); *catalog[n]->short_name=*args[1]; *catalog[n]->surname=args[2]; catalog[n]->amount=atoi(args[3]); *catalog[n]->description=args[4]; } return 0; }

    I get a warning (warning: assignment makes integer from pointer without a cast) for the lines: *catalog[n]->short_name=*args[1]; *catalog[n]->surname=args[2]; *catalog[n]->description=args[4]; As a result, after running the program I get a Segmentation Fault... Any help? Any ideas?

    Read the article

  • Can ASM method-visitors be used with interfaces?

    - by Olaf Mertens
    I need to write a tool that lists the classes that call methods of specified interfaces. It will be used as part of the build process of a large java application consisting of many modules. The goal is to automatically document the dependencies between certain java modules. I found several tools for dependency analysis, but they don't work on the method level, just for packages or jars. Finally I found ASM, that seems to do what I need. The following code prints the method dependencies of all class files in a given directory: import java.io.*; import java.util.*; import org.objectweb.asm.ClassReader; public class Test { public static void main(String[] args) throws Exception { File dir = new File(args[0]); List<File> classFiles = new LinkedList<File>(); findClassFiles(classFiles, dir); for (File classFile : classFiles) { InputStream input = new FileInputStream(classFile); new ClassReader(input).accept(new MyClassVisitor(), 0); input.close(); } } private static void findClassFiles(List<File> list, File dir) { for (File file : dir.listFiles()) { if (file.isDirectory()) { findClassFiles(list, file); } else if (file.getName().endsWith(".class")) { list.add(file); } } } } import org.objectweb.asm.MethodVisitor; import org.objectweb.asm.commons.EmptyVisitor; public class MyClassVisitor extends EmptyVisitor { private String className; @Override public void visit(int version, int access, String name, String signature, String superName, String[] interfaces) { this.className = name; } @Override public MethodVisitor visitMethod(int access, String name, String desc, String signature, String[] exceptions) { System.out.println(className + "." + name); return new MyMethodVisitor(); } } import org.objectweb.asm.commons.EmptyVisitor; public class MyMethodVisitor extends EmptyVisitor { @Override public void visitMethodInsn(int opcode, String owner, String name, String desc) { String key = owner + "." + name; System.out.println(" " + key); } } The Problem: The code works for regular classes only! If the class file contains an interface, visitMethod is called, but not visitMethodInsn. I don't get any info about the callers of interface methods. Any ideas?

    Read the article

  • Building an Infrastructure Cloud with Oracle VM for x86 + Enterprise Manager 12c

    - by Richard Rotter
    Cloud Computing? Everyone is talking about Cloud these days. Everyone is explaining how the cloud will help you to bring your service up and running very fast, secure and with little effort. You can find these kinds of presentations at almost every event around the globe. But what is really behind all this stuff? Is it really so simple? And the answer is: Yes it is! With the Oracle SW Stack it is! In this post, I will try to bring this down to earth, demonstrating how easy it could be to build a cloud infrastructure with Oracle's solution for cloud computing.But let me cover some basics first: How fast can you build a cloud?How elastic is your cloud so you can provide new services on demand? How much effort does it take to monitor and operate your Cloud Infrastructure in order to meet your SLAs?How easy is it to chargeback for your services provided? These are the critical success factors of Cloud Computing. And Oracle has an answer to all those questions. By using Oracle VM for X86 in combination with Enterprise Manager 12c you can build and control your cloud environment very fast and easy. What are the fundamental building blocks for your cloud? Oracle Cloud Building Blocks #1 Hardware Surprise, surprise. Even the cloud needs to run somewhere, hence you will need hardware. This HW normally consists of servers, storage and networking. But Oracles goes beyond that. There are Optimized Solutions available for your cloud infrastructure. This is a cookbook to build your HW cloud platform. For example, building your cloud infrastructure with blades and our network infrastructure will reduce complexity in your datacenter (Blades with switch network modules, splitter cables to reduce the amount of cables, TOR (Top Of the Rack) switches which are building the interface to your infrastructure environment. Reducing complexity even in the cabling will help you to manage your environment more efficient and with less risk. Of course, our engineered systems fit into the cloud perfectly too. Although they are considered as a PaaS themselves, having the database SW (for Exadata) and the application development environment (for Exalogic) already deployed on them, in general they are ideal systems to enable you building your own cloud and PaaS infrastructure. #2 Virtualization The next missing link in the cloud setup is virtualization. For me personally, it's one of the most hidden "secret", that oracle can provide you with a complete virtualization stack in terms of a hypervisor on both architectures: X86 and Sparc CPUs. There is Oracle VM for X86 and Oracle VM for Sparc available at no additional  license costs if your are running this virtualization stack on top of Oracle HW (and with Oracle Premier Support for HW). This completes the virtualization portfolio together with Solaris Zones introduced already with Solaris 10 a few years ago. Let me explain how Oracle VM for X86 works: Oracle VM for x86 consists of two main parts: - The Oracle VM Server: Oracle VM Server is installed on bare metal and it is the hypervisor which is able to run virtual machines. It has a very small footprint. The ISO-Image of Oracle VM Server is only 200MB large. It is very small but efficient. You can install a OVM-Server in less than 5 mins by booting the Server with the ISO-Image assigned and providing the necessary configuration parameters (like installing an Linux distribution). After the installation, the OVM-Server is ready to use. That's all. 
- The Oracle VM-Manager: OVM-Manager is the central management tool where you can control your OVM-Servers. OVM-Manager provides the graphical user interface, which is an Application Development Framework (ADF) application, with a familiar web-browser based interface, to manage Oracle VM Servers, virtual machines, and resources. The Oracle VM Manager has the following capabilities: Create virtual machines Create server pools Power on and off virtual machines Manage networks and storage Import virtual machines, ISO files, and templates Manage high availability of Oracle VM Servers, server pools, and virtual machines Perform live migration of virtual machines I want to highlight one of the goodies which you can use if you are running Oracle VM for X86: Preconfigured, downloadable Virtual Machine Templates form edelivery With these templates, you can download completely preconfigured Virtual Machines in your environment, boot them up, configure them at first time boot and use it. There are templates for almost all Oracle SW and Applications (like Fusion Middleware, Database, Siebel, etc.) available. #3) Cloud Management The management of your cloud infrastructure is key. This is a day-to-day job. Acquiring HW, installing a virtualization layer on top of it is done just at the beginning and if you want to expand your infrastructure. But managing your cloud, keeping it up and running, deploying new services, changing your chargeback model, etc, these are the daily jobs. These jobs must be simple, secure and easy to manage. The Enterprise Manager 12c Cloud provides this functionality from one management cockpit. Enterprise Manager 12c uses Oracle VM Manager to control OVM Serverpools. Once you registered your OVM-Managers in Enterprise Manager, then you are able to setup your cloud infrastructure and manage everything from Enterprise Manager. What you need to do in EM12c is: ">Register your OVM Manager in Enterprise ManagerAfter Registering your OVM Manager, all the functionality of Oracle VM for X86 is also available in Enterprise Manager. Enterprise Manager works as a "Manger" of the Manager. You can register as many OVM-Managers you want and control your complete virtualization environment Create Roles and Users for your Self Service Portal in Enterprise ManagerWith this step you allow users to logon on the Enterprise Manager Self Service Portal. Users can request Virtual Machines in this portal. Setup the Cloud InfrastructureSetup the Quotas for your self service users. How many VMs can they request? How much of your resources ( cpu, memory, storage, network, etc. etc.)? Which SW components (templates, assemblys) can your self service users request? In this step, you basically set up the complete cloud infrastructure. Setup ChargebackOnce your cloud is set up, you need to configure your chargeback mechanism. The Enterprise Manager collects the resources metrics, which are used in a very deep level. Almost all collected Metrics could be used in the chargeback module. You can define chargeback plans based on configurations (charge for the amount of cpu, memory, storage is assigned to a machine, or for a specific OS which is installed) or chargeback on resource consumption (% of cpu used, storage used, etc). Or you can also define a combination of configuration and consumption chargeback plans. The chargeback module is very flexible. 
Here is a overview of the workflow how to handle infrastructure cloud in EM: Summary As you can see, setting up an Infrastructure Cloud Service with Oracle VM for X86 and Enterprise Manager 12c is really simple. I personally configured a complete cloud environment with three X86 servers and a small JBOD san box in less than 3 hours. There is no magic in it, it is all straightforward. Of course, you have to have some experience with Oracle VM and Enterprise Manager. Experience in setting up Linux environments helps as well. I plan to publish a technical cookbook in the next few weeks. I hope you found this post useful and will see you again here on our blog. Any hints, comments are welcome!

    Read the article

  • Getting XML data from a external page and parsing it with PHP

    - by James P
    I'm trying to create a database of World of Warcraft gems. If I go to this page: http://www.wowarmory.com/search.xml?fl[source]=all&fl[type]=gems&fl[subTp]=purple&searchType=items And go to View Source in Firefox, I see a tonne of XML data which is exactly what I want. I wrote up this quick script to try and parse some of it: <?php $gemUrls = array( 'Blue' => 'http://www.wowarmory.com/search.xml?fl[source]=all&fl[type]=gems&fl[subTp]=blue&searchType=items', 'Red' => 'http://www.wowarmory.com/search.xml?fl[source]=all&fl[type]=gems&fl[subTp]=red&searchType=items', 'Yellow' => 'http://www.wowarmory.com/search.xml?fl[source]=all&fl[type]=gems&fl[subTp]=yellow&searchType=items', 'Meta' => 'http://www.wowarmory.com/search.xml?fl[source]=all&fl[type]=gems&fl[subTp]=meta&searchType=items', 'Green' => 'http://www.wowarmory.com/search.xml?fl[source]=all&fl[type]=gems&fl[subTp]=green&searchType=items', 'Orange' => 'http://www.wowarmory.com/search.xml?fl[source]=all&fl[type]=gems&fl[subTp]=orange&searchType=items', 'Purple' => 'http://www.wowarmory.com/search.xml?fl[source]=all&fl[type]=gems&fl[subTp]=purple&searchType=items', 'Prismatic' => 'http://www.wowarmory.com/search.xml?fl[source]=all&fl[type]=gems&fl[subTp]=purple&searchType=items' ); // Get blue gems $blueGems = file_get_contents($gemUrls['Blue']); $xml = new SimpleXMLElement($blueGems); echo $xml->items[0]->item; ?> But I get a load of errors like this: Warning: SimpleXMLElement::__construct() [simplexmlelement.--construct]: Entity: line 20: parser error : xmlParseEntityRef: no name in C:\xampp\htdocs\WoW\index.php on line 19 Warning: SimpleXMLElement::__construct() [simplexmlelement.--construct]: if(Browser.iphone && Number(getcookie2("mobIntPageVisits")) < 3 && getcookie2( in C:\xampp\htdocs\WoW\index.php on line 19 I'm not sure what's wrong. I think file_get_contents() is bringing back data that isn't XML, maybe some Javascript files judging by the iPhone parts in the errors. Is there any way to just get back the XML from that page? Without any HTML or anything? Thanks :)

    Read the article

  • Strange iPhone memory leak in xml parser

    - by Chris
    Update: I edited the code, but the problem persists... Hi everyone, this is my first post here - I found this place a great ressource for solving many of my questions. Normally I try my best to fix anything on my own but this time I really have no idea what goes wrong, so I hope someone can help me out. I am building an iPhone app that parses a couple of xml files using TouchXML. I have a class XMLParser, which takes care of downloading and parsing the results. I am getting memory leaks when I parse an xml file more than once with the same instance of XMLParser. Here is one of the parsing snippets (just the relevant part): for(int counter = 0; counter < [item childCount]; counter++) { CXMLNode *child = [item childAtIndex:counter]; if([[child name] isEqualToString:@"PRODUCT"]) { NSMutableDictionary *product = [[NSMutableDictionary alloc] init]; for(int j = 0; j < [child childCount]; j++) { CXMLNode *grandchild = [child childAtIndex:j]; if([[grandchild stringValue] length] > 1) { NSString *trimmedString = [[grandchild stringValue] stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]]; [product setObject:trimmedString forKey:[grandchild name]]; } } // Add product to current category array switch (categoryId) { case 0: [self.mobil addObject: product]; break; case 1: [self.allgemein addObject: product]; break; case 2: [self.besitzeIch addObject: product]; break; case 3: [self.willIch addObject: product]; break; default: break; } [product release]; } } The first time, I parse the xml no leak shows up in instruments, the next time I do so, I got a lot of leaks (NSCFString / NSCFDictionary). Instruments points me to this part inside CXMLNode.m, when I dig into a leaked object: theStringValue = [NSString stringWithUTF8String:(const char *)theXMLString]; if ( _node->type != CXMLTextKind ) xmlFree(theXMLString); } return(theStringValue); I really spent a long time and tried multiple approaches to fix this, but to no avail so far, maybe I am missing something essential? Any help is highly appreciated, thank you!

    Read the article

  • compiler warning at C++ template base class

    - by eike
    I get a compiler warning, that I don't understand in that context, when I compile the "Child.cpp" from the following code. (Don't wonder: I stripped off my class declarations to the bare minuum, so the content will not make much sense, but you will see the problem quicker). I get the warning with VS2003 and VS2008 on the highest warning level. The code AbstractClass.h : #include <iostream> template<typename T> class AbstractClass { public: virtual void Cancel(); // { std::cout << "Abstract Cancel" << std::endl; }; virtual void Process() = 0; }; //outside definition. if I comment out this and take the inline //definition like above (currently commented out), I don't get //a compiler warning template<typename T> void AbstractClass<T>::Cancel() { std::cout << "Abstract Cancel" << std::endl; } Child.h : #include "AbstractClass.h" class Child : public AbstractClass<int> { public: virtual void Process(); }; Child.cpp : #include "Child.h" #include <iostream> void Child::Process() { std::cout << "Process" << std::endl; } The warning The class "Child" is derived from "AbstractClass". In "AbstractClass" there's the public method "AbstractClass::Cancel()". If I define the method outside of the class body (like in the code you see), I get the compiler warning... AbstractClass.h(7) : warning C4505: 'AbstractClass::Cancel' : unreferenced local function has been removed with [T=int] ...when I compile "Child.cpp". I do not understand this, because this is a public function and the compiler can't know if I later reference this method or not. And, in the end, I reference this method, because I call it in main.cpp and despite this compiler warning, this method works if I compile and link all files and execute the program: //main.cpp #include <iostream> #include "Child.h" int main() { Child child; child.Cancel(); //works, despite the warning } If I do define the Cancel() function as inline (you see it as out commented code in AbstractClass.h), then I don't get the compiler warning. Of course my program works, but I want to understand this warning or is this just a compiler mistake? Furthermore, if do not implement AbsctractClass as a template class (just for a test purpose in this case) I also don't get the compiler warning...?

    Read the article

  • Day 3 - XNA: Hacking around with images

    - by dapostolov
    Yay! Today I'm going to get into some code! My mind has been on this all day! I find it amusing how I practice, daily, to be "in the moment" or "present" and the excitement and anticipation of this project seems to snatch it away from me frequently. WELL!!! (Shakes Excitedly) Let's do this =)! Let's code! For these next few days it is my intention to better understand image rendering using XNA; after said prototypes are complete I should (fingers crossed) be able to dive into my game code using the design document I hammered out the other night. On a personal note, I think the toughest thing right now is finding the time to do this project. Each night, after my little ones go to bed I can only really afford a couple hours of work on this project. However, I hope to utilise this time as best as I can because this is the first time in a while I've found a project that I've been passionate about. A friend recently asked me if I intend to go 3D or extend the game design. Yes. For now I'm keeping it simple. Lastly, just as a note, as I was doing some further research into image rendering this morning I came across some other XNA content and lessons learned. I believe this content could have probably been posted in the first couple of posts, however, I will share the new content as I learn it at the end of each day. Maybe I'll take some time later to fix the posts but for now Installation and Deployment - Lessons Learned I had installed the XNA studio  (Day 1) and the site instructions were pretty easy to follow. However, I had a small difficulty with my development environment. You see, I run a virtual desktop development environment. Even though I was able to code and compile all the tutorials the game failed to run...because I lacked a 3D capable card; it was not detected on the virtual box... First Lesson: The XNA runtime needs to "see" the 3D card! No sweat, Il copied the files over to my parent box and executed the program. ERROR. Hmm... Second Lesson (which I should have probably known but I let the excitement get the better of me): you need the XNA runtime on the client PC to run the game, oh, and don't forget the .Net Runtime! Sprite, it ain't just a Soft Drink... With these prototypes I intend to understand and perform the following tasks. learn game development terminology how to place and position (rotate) a static image on the screen how to layer static images on the screen understand image scaling can we reuse images? understand how framerate is handled in XNA how to display text , basic shapes, and colors on the screen how to interact with an image (collision of user input?) how to animate an image and understand basic animation techniques how to detect colliding images or screen edges how to manipulate the image, lets say colors, stretching how to focus on a segment of an image...like only displaying a frame on a film reel what's the best way to manage images (compression, storage, location, prevent artwork theft, etc.) Well, let's start with this "prototype" task list for now...Today, let's get an image on the screen and maybe I can mark a few of the tasks as completed... C# Prototype1 New Visual Studio Project Select the XNA Game Studio 3.1 Project Type Select the Windows Game 3.1 Template Type Prototype1 in the Name textbox provided Press OK. At this point code has auto-magically been created. Feel free to press the F5 key to run your first XNA program. You should have a blue screen infront of you. 
Without getting into the nitty gritty right now, the generated code basically clears the window content with the lovely CornflowerBlue color. Something to notice: when you move your mouse into the window... nothing. ooooo spoooky. Let's put an image on that screen! Step A - Get an image into the solution: Under "Content" in your Solution Explorer, right-click, add a new folder and name it "Sprites". Copy a small image in there; I copied a "Royalty Free" wizard hat from a quick Google search and named it wizards_hat.jpg (rightfully so!). Step B - Add the sprite and position fields: Now, open Game1.cs and locate the following line: SpriteBatch spriteBatch; Under this line type the following: SpriteBatch spriteBatch; // the line you are looking for... Texture2D sprite; Vector2 position; Step C - Load the image asset: Locate the LoadContent method and update it to match the following: protected override void LoadContent() { spriteBatch = new SpriteBatch(GraphicsDevice); // your image name goes here... sprite = Content.Load<Texture2D>("Sprites\\wizards_hat"); position = new Vector2(200, 100); base.LoadContent(); } Step D - Draw the image: Locate the Draw method and update it to match the following: protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.CornflowerBlue); spriteBatch.Begin(SpriteBlendMode.AlphaBlend); spriteBatch.Draw(sprite, position, Color.White); spriteBatch.End(); base.Draw(gameTime); } Step E - Compile and run: Engage! (F5) - Debug! Your image should now display on a CornflowerBlue window about 200 pixels from the left and 100 pixels from the top. Awesome! =) Pretty cool how we only coded a few lines to display an image, but believe me, there is plenty going on behind the scenes. However, for now, I'm going to call it a night here. Blogging all this progress certainly takes time... However, tomorrow night I'm going to detail what we just did, plus start checking off points on that list! I'm wondering right now if I should add pictures / code to this post... let me know if you want them =) Best Regards, D.
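As a point of reference, here is a consolidated sketch of roughly what Game1.cs ends up looking like after Steps A through E. This is an illustrative reconstruction rather than the post's exact file; it assumes the default XNA Game Studio 3.1 Windows Game template (so the Prototype1 namespace and the GraphicsDeviceManager field come from the template) and the wizard hat image added under Content\Sprites:

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    namespace Prototype1
    {
        public class Game1 : Microsoft.Xna.Framework.Game
        {
            GraphicsDeviceManager graphics;
            SpriteBatch spriteBatch;
            Texture2D sprite;     // the wizard hat texture loaded from the content pipeline
            Vector2 position;     // where to draw it, in screen pixels

            public Game1()
            {
                graphics = new GraphicsDeviceManager(this);
                Content.RootDirectory = "Content";
            }

            protected override void LoadContent()
            {
                spriteBatch = new SpriteBatch(GraphicsDevice);
                sprite = Content.Load<Texture2D>("Sprites\\wizards_hat");
                position = new Vector2(200, 100);   // 200px from the left, 100px from the top
                base.LoadContent();
            }

            protected override void Draw(GameTime gameTime)
            {
                GraphicsDevice.Clear(Color.CornflowerBlue);
                spriteBatch.Begin(SpriteBlendMode.AlphaBlend);
                spriteBatch.Draw(sprite, position, Color.White);
                spriteBatch.End();
                base.Draw(gameTime);
            }
        }
    }

Changing the position vector (for example from an Update override) is the natural next step toward the rotation and animation items on the task list above.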

    Read the article

  • NHibernate Mapping problem

    - by Bernard Larouche
My database is being driven by my NHibernate mapping files. I have a Category class that looks like the following: public class Category { public Category() : this("") { } public Category(string name) { Name = name; SubCategories = new List<Category>(); Products = new HashSet<Product>(); } public virtual int ID { get; set; } public virtual string Name { get; set; } public virtual string Description { get; set; } public virtual Category Parent { get; set; } public virtual bool IsDefault { get; set; } public virtual ICollection<Category> SubCategories { get; set; } public virtual ICollection<Product> Products { get; set; } } And here is my mapping file: <property name="Name" column="Name" type="string" not-null="true"/> <property name="IsDefault" column="IsDefault" type="boolean" not-null="true" /> <property name="Description" column="Description" type="string" not-null="true" /> <many-to-one name="Parent" column="ParentID"></many-to-one> <bag name="SubCategories" inverse="true"> <key column="ParentID"></key> <one-to-many class="Category"/> </bag> <set name="Products" table="Categories_Products"> <key column="CategoryId"></key> <many-to-many column="ProductId" class="Product"></many-to-many> </set> When I try to create the database I get the following error: failed: The INSERT statement conflicted with the FOREIGN KEY SAME TABLE constraint "FK9AD976763BF05E2A". The conflict occurred in database "CoderForTraders", table "dbo.Categories", column 'CategoryId'. The statement has been terminated. I looked on the net for some answers but found none. Thanks for your help.
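One thing worth noting alongside the question: this is not necessarily the cause of the error above, but a self-referencing ParentID foreign key is sensitive to insert order, so any seed data has to persist a parent category before the child rows that point at it. A purely illustrative sketch (the sessionFactory variable and the category names here are hypothetical, not from the original post):

    using (var session = sessionFactory.OpenSession())
    using (var tx = session.BeginTransaction())
    {
        var root = new Category("Electronics");                // root category, Parent stays null
        var child = new Category("Cameras") { Parent = root };
        root.SubCategories.Add(child);

        session.Save(root);    // the parent row gets its CategoryId first
        session.Save(child);   // the child row can now satisfy the ParentID foreign key
        tx.Commit();
    }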

    Read the article

  • Default Database Collations PenTesting Env

    - by dominicdinada
I am using Ubuntu 9.10 with XAMPP (LAMPP: MySQL 5.1.45, phpMyAdmin 3.3.1, PHP 5.3.2). My problem is that I set up my testing environment to debug my scripts locally, and when I did so a problem arose: I used Firefox's SQL Inject Me add-on to test for weaknesses, and doing so caused MySQL to change the default local collations; character sets dir: /opt/lampp/share/mysql/charsets/, collation connection: latin1_general_ci (global value latin1_swedish_ci), collation database: latin1_swedish_ci, collation server: latin1_swedish_ci. I have searched for quite some time for a solution to this problem and have come up with searching for the db.opt file which stores this information, without success. Upon not finding a solution I removed LAMPP with the "sudo rm -fR /opt" command and reinstalled, and the problem still persists. I have tried to change the collations manually and still end up with the database displaying latin1_swedish_ci as the default. Why is this a problem? Why is it a problem with MySQL? Because the application I am testing and debugging locally is built on CodeIgniter with the Smarty framework, and since this combination of frameworks detects the locale from what the database defaults are, I keep getting errors saying there is no language file for Swedish. Of course I could add the Swedish language file to work around this problem, but I do not feel the need to make this workaround a permanent solution, because as I move on to other projects I will run into similar problems every time: A) when importing database files, backups, etc., it will default to importing such databases with the Swedish locale; B) as time passes I might completely forget about this error and will be back to square one. I have found this code in searches for a fix, which seems to loop over the table names and alter each table to the desired collation: foreach ($tables as $value) { mysql_query("ALTER TABLE $value COLLATE latin1_general_ci"); } echo "The collation of your database has been successfully changed!"; It is handy for switching collations one schema at a time, however it is not a fix when a framework doesn't care that the said database is in one language - it tests for the default of the entire server. I would greatly appreciate help from anyone with knowledge of a purge or fix for this. One more final note: when I was testing I only thought to back up the application's database and not the entire schema of the install. No matter whether I uninstall or reinstall, the database still seems to carry these problems.

    Read the article

  • mysql_connect() causes page to not display (WAMP)

    - by VOIDHand
I currently have a website running MySQL and PHP in which this is all working. I've tried installing WAMPServer to be able to work on the site on my own computer, but I have been having issues trying to get the site to work correctly. HTML and PHP files work correctly (by going to http://localhost/index.php, etc.), but some of the pages display "This Webpage is not available" (in Chrome) or "Internet Explorer cannot display this webpage", which leads me to think there is an error in the server, as when a page doesn't exist it typically displays "Oops! This link appears to be broken" or IE's standard 404 page. Each page in my site has an include() call to set up an instance of a class to handle all database transactions. This class opens the connection to the database in its constructor. I have found that commenting out the contents of the constructor will allow the page to load, although this understandably causes errors later in the page. This is the contents of the included file: class dbAccess { private $db; function __construct() { $this->db = mysql_connect("externalURL","user","password") or die ("Connection for database not found."); mysql_select_db("dbName", $this->db) or die ("Database not selected"); } ... } As a note for what's happening here: I'm attempting to use the database on my site, rather than the local machine. The actual values of externalURL, dbName, user, and password work fine when I plug them into my database software and into the site's files, so I don't think the issue is with those values themselves, but rather some setting with Apache or WAMP. The issue persists if I try a local database as well. This is an excerpt of my Apache error log for one attempt at logging into the site: [Wed Apr 14 15:32:54 2010] [notice] Parent: child process exited with status 255 -- Restarting. [Wed Apr 14 15:32:54 2010] [notice] Apache/2.2.11 (Win32) PHP/5.3.0 configured -- resuming normal operations [Wed Apr 14 15:32:54 2010] [notice] Server built: Dec 10 2008 00:10:06 [Wed Apr 14 15:32:54 2010] [notice] Parent: Created child process 1756 [Wed Apr 14 15:32:55 2010] [notice] Child 1756: Child process is running [Wed Apr 14 15:32:55 2010] [notice] Child 1756: Acquired the start mutex. [Wed Apr 14 15:32:55 2010] [notice] Child 1756: Starting 64 worker threads. [Wed Apr 14 15:32:55 2010] [notice] Child 1756: Starting thread to listen on port 80. For reference, isLoggedIn (mentioned in the log entry I have since removed) is a method of the class above; assuming the connection works, that error should disappear. I've tried searching for solutions online, but haven't been able to find anything that helps. If you need any further information to help solve this issue, feel free to ask for it. (One more note: I don't even have Skype on this computer, so I can't see it being an issue, as this conflict seems to be the default response for any WAMP issue.) [Edit: Removed entry from the error log as it was solved as an unrelated issue]

    Read the article

  • Chrome and Firefox not able to find some sites after Ubuntu's recent update?

    - by gkr
    Loading of revision3.com in Chrome stops and status saying "waiting for static.inplay.tubemogul.com" grooveshark.com in Chrome never loads But wikipedia.org, google.com works just normal same behavior in Firefox too. I use wired DSL connection in Ubuntu 12.04. I guess this started happening after I upgraded the Chrome browser or Flash plugin few days ago from Ubuntu updates through update-manager. Thanks EDIT: Everything works normal in my WinXP. Problem is only in Ubuntu 12.04. This is what I get when I use wget gkr@gkr-desktop:~$ wget revision3.com --2012-06-12 08:58:01-- http://revision3.com/ Resolving revision3.com (revision3.com)... 173.192.117.198 Connecting to revision3.com (revision3.com)|173.192.117.198|:80... connected. HTTP request sent, awaiting response... 200 OK Length: unspecified [text/html] Saving to: `index.html' [ ] 81,046 32.0K/s in 2.5s 2012-06-12 08:58:04 (32.0 KB/s) - `index.html' saved [81046] gkr@gkr-desktop:~$ wget static.inplay.tubemogul.com --2012-06-12 08:51:25-- http://static.inplay.tubemogul.com/ Resolving static.inplay.tubemogul.com (static.inplay.tubemogul.com)... 72.21.81.253 Connecting to static.inplay.tubemogul.com (static.inplay.tubemogul.com)|72.21.81.253|:80... connected. HTTP request sent, awaiting response... 404 Not Found 2012-06-12 08:51:25 ERROR 404: Not Found. grooveshark.com nevers responses so I have to Ctrl+C to terminate wget. gkr@gkr-desktop:~$ wget grooveshark.com --2012-06-12 08:51:33-- http://grooveshark.com/ Resolving grooveshark.com (grooveshark.com)... 8.20.213.76 Connecting to grooveshark.com (grooveshark.com)|8.20.213.76|:80... connected. HTTP request sent, awaiting response... ^C gkr@gkr-desktop:~$ This is the apt term.log of updates I mentioned Log started: 2012-06-09 04:51:45 (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 161392 files and directories currently installed.) Preparing to replace google-chrome-stable 19.0.1084.52-r138391 (using .../google-chrome-stable_19.0.1084.56-r140965_i386.deb) ... Unpacking replacement google-chrome-stable ... Preparing to replace libpulse0 1:1.1-0ubuntu15 (using .../libpulse0_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement libpulse0 ... Preparing to replace libpulse-mainloop-glib0 1:1.1-0ubuntu15 (using .../libpulse-mainloop-glib0_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement libpulse-mainloop-glib0 ... Preparing to replace libpulsedsp 1:1.1-0ubuntu15 (using .../libpulsedsp_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement libpulsedsp ... Preparing to replace sudo 1.8.3p1-1ubuntu3.2 (using .../sudo_1.8.3p1-1ubuntu3.3_i386.deb) ... Unpacking replacement sudo ... Preparing to replace flashplugin-installer 11.2.202.235ubuntu0.12.04.1 (using .../flashplugin-installer_11.2.202.236ubuntu0.12.04.1_i386.deb) ... Unpacking replacement flashplugin-installer ... Preparing to replace pulseaudio-utils 1:1.1-0ubuntu15 (using .../pulseaudio-utils_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement pulseaudio-utils ... 
Preparing to replace pulseaudio 1:1.1-0ubuntu15 (using .../pulseaudio_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement pulseaudio ... Preparing to replace pulseaudio-module-gconf 1:1.1-0ubuntu15 (using .../pulseaudio-module-gconf_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement pulseaudio-module-gconf ... Preparing to replace pulseaudio-module-x11 1:1.1-0ubuntu15 (using .../pulseaudio-module-x11_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement pulseaudio-module-x11 ... Preparing to replace shared-mime-info 1.0-0ubuntu4 (using .../shared-mime-info_1.0-0ubuntu4.1_i386.deb) ... Unpacking replacement shared-mime-info ... Preparing to replace pulseaudio-module-bluetooth 1:1.1-0ubuntu15 (using .../pulseaudio-module-bluetooth_1%3a1.1-0ubuntu15.1_i386.deb) ... Unpacking replacement pulseaudio-module-bluetooth ... Processing triggers for menu ... /usr/share/menu/downverter.menu: 1: /usr/share/menu/downverter.menu: Syntax error: word unexpected (expecting ")") Processing triggers for man-db ... locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Processing triggers for desktop-file-utils ... Processing triggers for gnome-menus ... Processing triggers for ureadahead ... ureadahead will be reprofiled on next reboot Processing triggers for update-notifier-common ... flashplugin-installer: downloading http://archive.canonical.com/pool/partner/a/adobe-flashplugin/adobe-flashplugin_11.2.202.236.orig.tar.gz Installing from local file /tmp/tmpNrBt4g.gz Flash Plugin installed. Processing triggers for doc-base ... Processing 1 changed doc-base file... Registering documents with scrollkeeper... Setting up google-chrome-stable (19.0.1084.56-r140965) ... Setting up libpulse0 (1:1.1-0ubuntu15.1) ... Setting up libpulse-mainloop-glib0 (1:1.1-0ubuntu15.1) ... Setting up libpulsedsp (1:1.1-0ubuntu15.1) ... Setting up sudo (1.8.3p1-1ubuntu3.3) ... Installing new version of config file /etc/pam.d/sudo ... Setting up flashplugin-installer (11.2.202.236ubuntu0.12.04.1) ... locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Setting up pulseaudio-utils (1:1.1-0ubuntu15.1) ... Setting up pulseaudio (1:1.1-0ubuntu15.1) ... Setting up pulseaudio-module-gconf (1:1.1-0ubuntu15.1) ... Setting up pulseaudio-module-x11 (1:1.1-0ubuntu15.1) ... Setting up shared-mime-info (1.0-0ubuntu4.1) ... Setting up pulseaudio-module-bluetooth (1:1.1-0ubuntu15.1) ... Processing triggers for menu ... /usr/share/menu/downverter.menu: 1: /usr/share/menu/downverter.menu: Syntax error: word unexpected (expecting ")") Processing triggers for libc-bin ... 
ldconfig deferred processing now taking place Log ended: 2012-06-09 04:53:32 This is from history.log Start-Date: 2012-06-09 04:51:45 Commandline: aptdaemon role='role-commit-packages' sender=':1.56' Upgrade: shared-mime-info:i386 (1.0-0ubuntu4, 1.0-0ubuntu4.1), pulseaudio:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), libpulse-mainloop-glib0:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), sudo:i386 (1.8.3p1-1ubuntu3.2, 1.8.3p1-1ubuntu3.3), pulseaudio-module-bluetooth:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), pulseaudio-module-x11:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), flashplugin-installer:i386 (11.2.202.235ubuntu0.12.04.1, 11.2.202.236ubuntu0.12.04.1), pulseaudio-utils:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), pulseaudio-module-gconf:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), libpulse0:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), libpulsedsp:i386 (1.1-0ubuntu15, 1.1-0ubuntu15.1), google-chrome-stable:i386 (19.0.1084.52-r138391, 19.0.1084.56-r140965) End-Date: 2012-06-09 04:53:32

    Read the article

  • CodePlex Daily Summary for Thursday, September 06, 2012

    CodePlex Daily Summary for Thursday, September 06, 2012Popular Releasesmenu4web: menu4web 0.4.1 - javascript menu for web sites: This release is for those who believe that global variables are evil. menu4web has been wrapped into m4w singleton object. Added "Vertical Tabs" example which illustrates object notation.WinRT XAML Toolkit: WinRT XAML Toolkit - 1.2.1: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features AsyncUI extensions Controls and control extensions Converters Debugging helpers Imaging IO helpers VisualTree helpers Samples Recent changes NOTE: Namespace changes DebugConsol...iPDC - Free Phasor Data Concentrator: iPDC-v1.3.1: iPDC suite version-1.3.1, Modifications and Bug Fixed (from v 1.3.0) New User Manual for iPDC-v1.3.1 available on websites. Bug resolved : PMU Simulator TCP connection error and hang connection for client (PDC). Now PMU Simulator (server) can communicate more than one PDCs (clients) over TCP and UDP parallely. PMU Simulator is now sending the exact data frames as mentioned in data rate by user. PMU Simulator data rate has been verified by iPDC database entries and PMU Connection Tes...Microsoft SQL Server Product Samples: Database: AdventureWorks OData Feed: The AdventureWorks OData service exposes resources based on specific SQL views. The SQL views are a limited subset of the AdventureWorks database that results in several consuming scenarios: CompanySales Documents ManufacturingInstructions ProductCatalog TerritorySalesDrilldown WorkOrderRouting How to install the sample You can consume the AdventureWorks OData feed from http://services.odata.org/AdventureWorksV3/AdventureWorks.svc. You can also consume the AdventureWorks OData fe...Desktop Google Reader: 1.4.6: Sorting feeds alphabetical is now optional (see preferences window)DotNetNuke® Community Edition CMS: 06.02.03: Major Highlights Fixed issue where mailto: links were not working when sending bulk email Fixed issue where uses did not see friendship relationships Problem is in 6.2, which does not show in the Versions Affected list above. Fixed the issue with cascade deletes in comments in CoreMessaging_Notification Fixed UI issue when using a date fields as a required profile property during user registration Fixed error when running the product in debug mode Fixed visibility issue when...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.65: Fixed null-reference error in the build task constructor.Active Forums for DotNetNuke CMS: Active Forums 5.0.0 RC: RC release of Active Forums 5.0.Droid Explorer: Droid Explorer 0.8.8.7 Beta: Bug in the display icon for apk's, will fix with next release Added fallback icon if unable to get the image/icon from the Cloud Service Removed some stale plugins that were either out dated or incomplete. 
Added handler for *.ab files for restoring backups Added plugin to create device backups Backups stored in %USERPROFILE%\Android Backups\%DEVICE_ID%\ Added custom folder icon for the android backups directory better error handling for installing an apk bug fixes for the Runn...BI System Monitor: v2.1: Data Audits report and supporting SQL, and SSIS package Environment Overview report enhancements, improving the appearance, addition of data audit finding indicators Note: SQL 2012 version coming soon.Hidden Capture (HC): Hidden Capture 1.1: Hidden Capture 1.1 by Mohsen E.Dawatgar http://Hidden-Capture.blogfa.comExt Spec: Ext Spec 0.2.1: Refined examples and improved distribution options.The Visual Guide for Building Team Foundation Server 2012 Environments: Version 1: --Nearforums - ASP.NET MVC forum engine: Nearforums v8.5: Version 8.5 of Nearforums, the ASP.NET MVC Forum Engine. New features include: Built-in search engine using Lucene.NET Flood control improvements Notifications improvements: sync option and mail body View Roadmap for more details webdeploy package sha1 checksum: 961aff884a9187b6e8a86d68913cdd31f8deaf83WiX Toolset: WiX Toolset v3.6: WiX Toolset v3.6 introduces the Burn bootstrapper/chaining engine and support for Visual Studio 2012 and .NET Framework 4.5. Other minor functionality includes: WixDependencyExtension supports dependency checking among MSI packages. WixFirewallExtension supports more features of Windows Firewall. WixTagExtension supports Software Id Tagging. WixUtilExtension now supports recursive directory deletion. Melt simplifies pure-WiX patching by extracting .msi package content and updating .w...Iveely Search Engine: Iveely Search Engine (0.2.0): ????ISE?0.1.0??,?????,ISE?0.2.0?????????,???????,????????20???follow?ISE,????,??ISE??????????,??????????,?????????,?????????0.2.0??????,??????????。 Iveely Search Engine ?0.2.0?????????“??????????”,??????,?????????,???????,???????????????????,????、????????????。???0.1.0????????????: 1. ??“????” ??。??????????,?????????,???????????????????。??:????????,????????????,??????????????????。??????。 2. ??“????”??。?0.1.0??????,???????,???????????????,?????????????,????????,?0.2.0?,???????...GmailDefaultMaker: GmailDefaultMaker 3.0.0.2: Add QQ Mail BugfixSmart Data Access layer: Smart Data access Layer Ver 3: In this version support executing inline query is added. Check Documentation section for detail.DotNetNuke® Form and List: 06.00.04: DotNetNuke Form and List 06.00.04 Don't forget to backup your installation before upgrade. Changes in 06.00.04 Fix: Sql Scripts for 6.003 missed object qualifiers within stored procedures Fix: added missing resource "cmdCancel.Text" in form.ascx.resx Changes in 06.00.03 Fix: MakeThumbnail was broken if the application pool was configured to .Net 4 Change: Data is now stored in nvarchar(max) instead of ntext Changes in 06.00.02 The scripts are now compatible with SQL Azure, tested in a ne...Coevery - Free CRM: Coevery 1.0.0.24: Add a sample database, and installation instructions.New ProjectsAny-Service: AnyService is a .net 4.0 Windows service shell. It hosts any windows application in non-gui mode to run as a service.BabyCloudDrives - the multi cloud drive desktop's application: wpf ????BLACK ORANGE: Download The HPAD TEXT EDITOR and use it Wisely.. CodePlex New Release Checker: CodePlex New Release Checker is a small library that makes it easy to add, "New Version Available!" 
functionality to your CodePlex project.Collect: ????????!CSVManager: CSV??CSV?????,????CSV??,??????Exam Project: My Exam Project. Computer Vision, C and OpenCV-FTP: Hey guys thanks for checking out my ftp!Haushaltsbuch: 1ModMaker.Lua: ModMaker.Lua is an open source .NET library that parses and executes Lua code.MyJabbr: MyJabbr netduinoscope: Design shield and software to use netduino as oscilloscopeNetSurveillance Web Application: Net Surveillance Web ApplicationNiconicoApiHelper: ????API?????????OStega: A simple library for encrypt text into an bmp or png image.OURORM: ormTFS Cloud Deployment Toolkit: The TFS Cloud Deployment Toolkit is a set of tools that integrate with TFS 2010 to help manage configuration and deployment to various remote environments.The Visual Guide for Building Team Foundation Server 2012 Environments: A step-by-step guide for building Team Foundation Server 2012 environments that include SharePoint Server 2010, SQL Server 2012, Windows Server 2012 and more!WinRT LineChart: An attempt at creating an usable LineChart for everyone to use in his/her own Windows 8 Apps

    Read the article

  • .NET 2.0 Process Elevation for App Installation

    - by Brian Gillespie
We have an application written in both C++ and .NET that installs for all users in the Program Files folder. This application downloads new versions of itself (as MSI installers) and spawns the new installer process to replace itself. The install process as it exists today: Copy an install manager app (C#, .NET 2.0) to the temp directory; call this 'Manager'. Manager is executed with elevated privs per this article. The original application exits. Manager spawns the MSI installer (with elevated privs, since the copy is elevated). Manager spawns the new version of the app. The bug: the newly installed app is running in an elevated state. This causes problems I won't enumerate here. Ideally, the launch of the newly installed app would be run with the permissions of the original user. I can't figure out how to demote the app back to being the standard user after elevation. An inelegant hack (yeah, yeah, this whole process is inelegant anyway): Copy the install manager to the temp directory. Run the install manager with standard user privs; let's call this instance 'LowlyManager'. The original application exits. LowlyManager spawns the app again, this time with elevated privs; let's name this instance 'UpperManagement'. UpperManagement spawns the installer. UpperManagement exits gracefully, returning the exit code of the installer. LowlyManager interprets the error code from UpperManagement, and spawns the newly installed application, this time as the original invoker. Is there a better way to do this? (I've left out a bunch of other details before and after these steps that make the process smoother for the user, but this should be enough to understand the core of the problem I'm trying to solve.) Other requirements: We can't install as a per-user app. The user shouldn't be presented with an authentication dialog box if UAC would have simply asked "are you sure you want to allow this?". I think this might kill a solution using WindowsImpersonationContext, but I'm not sure. The system needs to work on XP, Vista, and Windows 7 (even if there is a separate process for XP).
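To make the "inelegant hack" flow more concrete, here is a rough sketch of the two launches involved. Everything here is hypothetical (the class name, method names, and paths are made up for illustration, and this is only one way to wire it up): LowlyManager stays un-elevated, starts an elevated copy via the shell's runas verb so the MSI can run with admin rights, and then launches the newly installed app itself so the app inherits the original user's standard token.

    using System.Diagnostics;

    static class InstallLauncher
    {
        // Run UpperManagement elevated; on Vista/Windows 7 the runas verb triggers the
        // UAC prompt, and the method returns the installer exit code UpperManagement relays.
        public static int RunElevatedInstall(string managerPath, string msiPath)
        {
            ProcessStartInfo psi = new ProcessStartInfo(managerPath, "\"" + msiPath + "\"");
            psi.UseShellExecute = true;   // required for the runas verb
            psi.Verb = "runas";
            using (Process p = Process.Start(psi))
            {
                p.WaitForExit();
                return p.ExitCode;
            }
        }

        // Called from the still-unelevated LowlyManager, so the new version of the app
        // starts with the original user's (non-admin) token rather than the elevated one.
        public static void LaunchNewVersion(string appPath)
        {
            Process.Start(appPath);
        }
    }

Whether this satisfies the requirement about not prompting for credentials depends on the account: for an admin user under UAC the runas verb shows a consent dialog, a standard user is asked for admin credentials, and on XP it falls back to the Run As dialog.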

    Read the article

  • More GCC link time issues: undefined reference to main

    - by vikramtheone
Hi guys, I'm writing software for a Cortex-A8 processor and I have to write some ARM assembly code to access specific registers. I'm making use of the GNU compilers and related toolchains; these tools are installed on the processor board (Freescale i.MX515) with Ubuntu. I make a connection to it from my host PC (Windows) using WinSCP and the PuTTY terminal. As usual I started with a simple C project having main.c and functions.s. I compile main.c using GCC, assemble functions.s using as, and link the generated object files using, once again, GCC, but I get strange errors during this process. An important finding - meanwhile, I found out that my assembly code may have some issues, because when I individually assemble it using the command as -o functions.o functions.s and try running the generated functions.o using the ./functions.o command, the bash shell fails to recognize this file as an executable (on pressing tab, functions.o is not getting selected/PuTTY is not highlighting the file). Can anyone suggest what's happening here? Are there any specific options I have to pass to GCC during the linking process? The errors I see are strange and beyond my understanding; I don't understand what GCC is referring to. I'm pasting here the contents of main.c, functions.s, the Makefile and the list of errors. Help, please!!! Vikram main.c #include <stdio.h> #include <stdlib.h> int main(void) { puts("!!!Hello World!!!"); /* prints !!!Hello World!!! */ return EXIT_SUCCESS; } functions.s /* Main program */ .equ STACK_TOP, 0x20000800 .text .global _start .syntax unified _start: .word STACK_TOP, start .type start, function start: movs r0, #10 movs r1, #0 .end Makefile all: hello hello: main.o functions.o gcc -o main.o functions.o main.o: main.c gcc -c -mcpu=cortex-a8 main.c functions.o: functions.s as -mcpu=cortex-a8 -o functions.o functions.s Errors ubuntu@ubuntu-desktop:~/Documents/Project/Others/helloworld$ make gcc -c -mcpu=cortex-a8 main.c as -mcpu=cortex-a8 -o functions.o functions.s gcc -o main.o functions.o functions.o: In function `_start': (.text+0x0): multiple definition of `_start' /usr/lib/gcc/arm-linux-gnueabi/4.3.3/../../../crt1.o:init.c:(.text+0x0): first defined here /usr/lib/gcc/arm-linux-gnueabi/4.3.3/../../../crt1.o: In function `_start': init.c:(.text+0x30): undefined reference to `main' collect2: ld returned 1 exit status make: *** [hello] Error 1

    Read the article

< Previous Page | 1572 1573 1574 1575 1576 1577 1578 1579 1580 1581 1582 1583  | Next Page >