Search Results



  • How to Use the Signature Editor in Outlook 2013

    - by Lori Kaufman
    The Signature Editor in Outlook 2013 allows you to create a custom signature from text, graphics, or business cards. We will show you how to use the various features of the Signature Editor to customize your signatures. To open the Signature Editor, click the File tab and select Options on the left side of the Account Information screen. Then, click Mail on the left side of the Options dialog box and click the Signatures button. For more details, refer to one of the articles mentioned above. Changing the font for your signature is pretty self-explanatory. Select the text for which you want to change the font and select the desired font from the drop-down list. You can also set the justification (left, center, right) for each line of text separately. The drop-down list that reads Automatic by default allows you to change the color of the selected text. Click OK to accept your changes and close the Signatures and Stationery dialog box. To see your signature in an email, click Mail on the Navigation Bar. Click New Email on the Home tab. The Message window displays and your default signature is inserted into the body of the email. NOTE: You shouldn’t use fonts that are not common in your signatures. In order for the recipient to see your signature as you intended, the font you choose also needs to be installed on the recipient’s computer. If the font is not installed, the recipient would see a different font, the wrong characters, or even placeholder characters, which are empty square boxes. Close the Message window using the File tab or the X button in the upper, right corner of the Message window. You can save it as a draft if you want, but it’s not necessary. If you decide to use a font that is not common, a better way to do so would be to create a signature as an image, or logo. Create your image or logo in an image editing program making it the exact size you want to use in your signature. Save the image in a file size as small as possible. The .jpg format works well for pictures, the .png format works well for detailed graphics, and the .gif format works well for simple graphics. The .gif format generally produces the smallest files. To insert an image in your signature, open the Signatures and Stationery dialog box again. Either delete the text currently in the editor, if any, or create a new signature. Then, click the image button on the editor’s toolbar. On the Insert Picture dialog box, navigate to the location of your image, select the file, and click Insert. If you want to insert an image from the web, you must enter the full URL for the image in the File name edit box (instead of the local image filename). For example, http://www.somedomain.com/images/signaturepic.gif. If you want to link to the image at the specified URL, you must also select Link to File from the Insert drop-down list to maintain the URL reference. The image is inserted into the Edit signature box. Click OK to accept your changes and close the Signatures and Stationery dialog box. Create a new email message again. You’ll notice the image you inserted into the signature displays in the body of the message. Close the Message window using the File tab or the X button in the upper, right corner of the Message window. You may want to put a link to a webpage or an email link in your signature. To do this, open the Signatures and Stationery dialog box again. Enter the text to display for the link, highlight the text, and click the Hyperlink button on the editor’s toolbar. 
On the Insert Hyperlink dialog box, select the type of link from the list on the left and enter the webpage, email, or other type of address in the Address edit box. You can change the text that will display in the signature for the link in the Text to display edit box. Click OK to accept your changes and close the dialog box. The link displays in the editor with the default blue, underlined text. Click OK to accept your changes and close the Signatures and Stationery dialog box. Here’s an example of an email message with a link in the signature. Close the Message window using the File tab or the X button in the upper, right corner of the Message window. You can also insert your contact information into your signature as a Business Card. To do so, click Business Card on the editor’s toolbar. On the Insert Business Card dialog box, select the contact you want to insert as a Business Card. Select a size for the Business Card image from the Size drop-down list. Click OK. The Business Card image displays in the Signature Editor. Click OK to accept your changes and close the Signatures and Stationery dialog box. When you insert a Business Card into your signature, the Business Card image displays in the body of the email message and a .vcf file containing your contact information is attached to the email. This .vcf file can be imported into programs like Outlook that support this format. Close the Message window using the File tab or the X button in the upper, right corner of the Message window. You can also insert your Business Card into your signature without the image or without the .vcf file attached. If you want to provide recipients your contact info in a .vcf file, but don’t want to attach it to every email, you can upload the .vcf file to a location on the internet and add a link to the file, such as “Get my vCard,” in your signature. NOTE: If you want to edit your business card, such as applying a different template to it, you must select a different View other than People for your Contacts folder so you can open the full contact editing window.     

    Read the article

  • XNA: Rotating Bones

    - by MLM
    XNA 4.0 I am trying to learn how to rotate bones on a very simple tank model I made in Cinema 4D. It is rigged by 3 bones, Root - Main - Turret - Barrel I have binded all of the objects to the bones so that all translations/rotations work as planned in C4D. I exported it as .fbx I based my test project after: http://create.msdn.com/en-US/education/catalog/sample/simple_animation I can build successfully with no errors but all the rotations I try to do to my bones have no effect. I can transform my Root successfully using below but the bone transforms have no effect: myModel.Root.Transform = world; Matrix turretRotation = Matrix.CreateRotationY(MathHelper.ToRadians(37)); Matrix barrelRotation = Matrix.CreateRotationX(barrelRotationValue); MainBone.Transform = MainTransform; TurretBone.Transform = turretRotation * TurretTransform; BarrelBone.Transform = barrelRotation * BarrelTransform; I am wondering if my model is just not right or something important I am missing in the code. Here is my Game1.cs using System; using System.Collections.Generic; using System.Linq; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Audio; using Microsoft.Xna.Framework.Content; using Microsoft.Xna.Framework.GamerServices; using Microsoft.Xna.Framework.Graphics; using Microsoft.Xna.Framework.Input; using Microsoft.Xna.Framework.Media; namespace ModelTesting { /// <summary> /// This is the main type for your game /// </summary> public class Game1 : Microsoft.Xna.Framework.Game { GraphicsDeviceManager graphics; SpriteBatch spriteBatch; float aspectRatio; Tank myModel; public Game1() { graphics = new GraphicsDeviceManager(this); Content.RootDirectory = "Content"; } /// <summary> /// Allows the game to perform any initialization it needs to before starting to run. /// This is where it can query for any required services and load any non-graphic /// related content. Calling base.Initialize will enumerate through any components /// and initialize them as well. /// </summary> protected override void Initialize() { // TODO: Add your initialization logic here myModel = new Tank(); base.Initialize(); } /// <summary> /// LoadContent will be called once per game and is the place to load /// all of your content. /// </summary> protected override void LoadContent() { // Create a new SpriteBatch, which can be used to draw textures. spriteBatch = new SpriteBatch(GraphicsDevice); // TODO: use this.Content to load your game content here myModel.Load(Content); aspectRatio = graphics.GraphicsDevice.Viewport.AspectRatio; } /// <summary> /// UnloadContent will be called once per game and is the place to unload /// all content. /// </summary> protected override void UnloadContent() { // TODO: Unload any non ContentManager content here } /// <summary> /// Allows the game to run logic such as updating the world, /// checking for collisions, gathering input, and playing audio. /// </summary> /// <param name="gameTime">Provides a snapshot of timing values.</param> protected override void Update(GameTime gameTime) { // Allows the game to exit if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed) this.Exit(); // TODO: Add your update logic here float time = (float)gameTime.TotalGameTime.TotalSeconds; // Move the pieces /* myModel.TurretRotation = (float)Math.Sin(time * 0.333f) * 1.25f; myModel.BarrelRotation = (float)Math.Sin(time * 0.25f) * 0.333f - 0.333f; */ base.Update(gameTime); } /// <summary> /// This is called when the game should draw itself. 
/// </summary> /// <param name="gameTime">Provides a snapshot of timing values.</param> protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.CornflowerBlue); // Calculate the camera matrices. float time = (float)gameTime.TotalGameTime.TotalSeconds; Matrix rotation = Matrix.CreateRotationY(MathHelper.ToRadians(45)); Matrix view = Matrix.CreateLookAt(new Vector3(2000, 500, 0), new Vector3(0, 150, 0), Vector3.Up); Matrix projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, graphics.GraphicsDevice.Viewport.AspectRatio, 10, 10000); // TODO: Add your drawing code here myModel.Draw(rotation, view, projection); base.Draw(gameTime); } } } And here is my tank class: using System; using System.Collections.Generic; using System.Linq; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Audio; using Microsoft.Xna.Framework.Content; using Microsoft.Xna.Framework.GamerServices; using Microsoft.Xna.Framework.Graphics; using Microsoft.Xna.Framework.Input; using Microsoft.Xna.Framework.Media; namespace ModelTesting { public class Tank { Model myModel; // Array holding all the bone transform matrices for the entire model. // We could just allocate this locally inside the Draw method, but it // is more efficient to reuse a single array, as this avoids creating // unnecessary garbage. public Matrix[] boneTransforms; // Shortcut references to the bones that we are going to animate. // We could just look these up inside the Draw method, but it is more // efficient to do the lookups while loading and cache the results. ModelBone MainBone; ModelBone TurretBone; ModelBone BarrelBone; // Store the original transform matrix for each animating bone. Matrix MainTransform; Matrix TurretTransform; Matrix BarrelTransform; // current animation positions float turretRotationValue; float barrelRotationValue; /// <summary> /// Gets or sets the turret rotation amount. /// </summary> public float TurretRotation { get { return turretRotationValue; } set { turretRotationValue = value; } } /// <summary> /// Gets or sets the barrel rotation amount. /// </summary> public float BarrelRotation { get { return barrelRotationValue; } set { barrelRotationValue = value; } } /// <summary> /// Load the model /// </summary> public void Load(ContentManager Content) { // TODO: use this.Content to load your game content here myModel = Content.Load<Model>("Models\\simple_tank02"); MainBone = myModel.Bones["Main"]; TurretBone = myModel.Bones["Turret"]; BarrelBone = myModel.Bones["Barrel"]; MainTransform = MainBone.Transform; TurretTransform = TurretBone.Transform; BarrelTransform = BarrelBone.Transform; // Allocate the transform matrix array. 
boneTransforms = new Matrix[myModel.Bones.Count]; } public void Draw(Matrix world, Matrix view, Matrix projection) { myModel.Root.Transform = world; Matrix turretRotation = Matrix.CreateRotationY(MathHelper.ToRadians(37)); Matrix barrelRotation = Matrix.CreateRotationX(barrelRotationValue); MainBone.Transform = MainTransform; TurretBone.Transform = turretRotation * TurretTransform; BarrelBone.Transform = barrelRotation * BarrelTransform; myModel.CopyAbsoluteBoneTransformsTo(boneTransforms); // Draw the model, a model can have multiple meshes, so loop foreach (ModelMesh mesh in myModel.Meshes) { // This is where the mesh orientation is set foreach (BasicEffect effect in mesh.Effects) { effect.World = boneTransforms[mesh.ParentBone.Index]; effect.View = view; effect.Projection = projection; effect.EnableDefaultLighting(); } // Draw the mesh, will use the effects set above mesh.Draw(); } } } }

    Read the article

  • XSLT Transformation of XML File

    - by Russ Clark
    I've written a simple XML Document that I am trying to transform with an XSLT file, but I get no results when I run the code. Here is my XML document: <?xml version="1.0" encoding="utf-8" ?> <Employee xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="XSLT_MVC.Controllers"> <ID>42</ID> <Name>Russ</Name> </Employee> And here is the XSLT file: <?xml version="1.0" encoding="utf-8"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:msxsl="urn:schemas-microsoft-com:xslt" exclude-result-prefixes="msxsl" xmlns:ex="XSLT_MVC.Controllers" > <xsl:output method="xml" indent="yes"/> <xsl:template match="/"> <xsl:copy> <!--<xsl:apply-templates select="@* | node()"/>--> <xsl:value-of select="ex:Employee/Name"/> </xsl:copy> </xsl:template> </xsl:stylesheet> Here is the code (from a C# console app) I am trying to run to perform the transform: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Xml; using System.Xml.Xsl; using System.Xml.XPath; namespace XSLT { class Program { static void Main(string[] args) { Transform(); } public static void Transform() { XPathDocument myXPathDoc = new XPathDocument(@"docs\sampledoc.xml"); XslTransform myXslTrans = new XslTransform(); myXslTrans.Load(@"docs\new.xslt"); XmlTextWriter myWriter = new XmlTextWriter( "results.html", null); myXslTrans.Transform(myXPathDoc, null, myWriter); myWriter.Close(); } } } When I run the code I get a blank html file. I think I may have problems with the namespaces, but am not sure. Can anyone help with this?
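    One detail worth noting: because the document declares xmlns="XSLT_MVC.Controllers" as its default namespace, the Name element is in that namespace too, so the stylesheet's XPath generally needs the prefix on both steps (select="ex:Employee/ex:Name") before the value-of can match anything. As a rough cross-check that the stylesheet rather than the host code is the issue, here is a minimal Java sketch that runs the same transform through javax.xml.transform (the class name is made up; the file paths are the ones from the question):
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    public class TransformCheck {
        public static void main(String[] args) throws Exception {
            // Build a transformer from the stylesheet; any XSLT 1.0 processor
            // applies the same namespace rules as the .NET XslTransform class.
            TransformerFactory factory = TransformerFactory.newInstance();
            Transformer transformer = factory.newTransformer(new StreamSource("docs/new.xslt"));

            // Run the transform and write the output to results.html.
            transformer.transform(new StreamSource("docs/sampledoc.xml"),
                                  new StreamResult("results.html"));
        }
    }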

    Read the article

  • JSON Post To Rails From Android

    - by Stealthnh
    I'm currently working on an android app that interfaces with a Ruby on Rails app through XML and JSON. I can currently pull all my posts from my website through XML but I can't seem to post via JSON. My app currently builds a JSON object from a form that looks a little something like this: { "post": { "other_param": "1", "post_content": "Blah blah blah" } } On my server I believe the Create method in my Posts Controller is set up correctly: def create @post = current_user.posts.build(params[:post]) respond_to do |format| if @post.save format.html { redirect_to @post, notice: 'Post was successfully created.' } format.json { render json: @post, status: :created, location: @post } format.xml { render xml: @post, status: :created, location: @post } else format.html { render action: "new" } format.json { render json: @post.errors, status: :unprocessable_entity } format.xml { render xml: @post.errors, status: :unprocessable_entity } end end end And in my android app I have a method that takes that JSON Object I posted earlier as a parameter along with the username and password for being authenticated (Authentication is working I've tested it, and yes Simple HTTP authentication is probably not the best choice but its a quick and dirty fix) and it then sends the JSON Object through HTTP POST to the rails server. This is that method: public static void sendPost(JSONObject post, String email, String password) { DefaultHttpClient client = new DefaultHttpClient(); client.getCredentialsProvider().setCredentials(new AuthScope(null,-1), new UsernamePasswordCredentials(email,password)); HttpPost httpPost = new HttpPost("http://mysite.com/posts"); JSONObject holder = new JSONObject(); try { holder.put("post", post); StringEntity se = new StringEntity(holder.toString()); Log.d("SendPostHTTP", holder.toString()); httpPost.setEntity(se); httpPost.setHeader("Content-Type","application/json"); } catch (UnsupportedEncodingException e) { Log.e("Error",""+e); e.printStackTrace(); } catch (JSONException js) { js.printStackTrace(); } HttpResponse response = null; try { response = client.execute(httpPost); } catch (ClientProtocolException e) { e.printStackTrace(); Log.e("ClientProtocol",""+e); } catch (IOException e) { e.printStackTrace(); Log.e("IO",""+e); } HttpEntity entity = response.getEntity(); if (entity != null) { try { entity.consumeContent(); } catch (IOException e) { Log.e("IO E",""+e); e.printStackTrace(); } } } Currently when I call this method and pass it the correct JSON Object it doesn't do anything and I have no clue why or how to figure out what is going wrong. Is my JSON still formatted wrong, does there really need to be that holder around the other data? Or do I need to use something other than HTTP POST? Or is this just something on the Rails end? A route or controller that isn't right? I'd be really grateful if someone could point me in the right direction, because I don't know where to go from here.
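    For comparison, here is a stripped-down, plain-JVM Java sketch of the same POST that can help isolate the problem outside the app: it sends the identical JSON body with basic auth and also sets an Accept header so Rails picks the format.json responder. The URL, credentials, and the choice of posting to /posts.json are assumptions for illustration only:
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    public class PostCheck {
        public static void main(String[] args) throws Exception {
            String json = "{\"post\":{\"other_param\":\"1\",\"post_content\":\"Blah blah blah\"}}";
            String auth = Base64.getEncoder()
                    .encodeToString("email@example.com:password".getBytes(StandardCharsets.UTF_8));

            HttpURLConnection conn =
                    (HttpURLConnection) new URL("http://mysite.com/posts.json").openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setRequestProperty("Accept", "application/json"); // ask Rails for the json responder
            conn.setRequestProperty("Authorization", "Basic " + auth);

            // Write the JSON body.
            try (OutputStream out = conn.getOutputStream()) {
                out.write(json.getBytes(StandardCharsets.UTF_8));
            }

            // 201 means the json format block ran; 422 carries the validation errors.
            System.out.println(conn.getResponseCode() + " " + conn.getResponseMessage());
        }
    }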

    Read the article

  • Make an HTTP POST from the server using user credentials - integrated security

    - by opensas
    I'm trying to make a POST, from an ASP classic server-side page, using the user's credentials. I'm using msxml2.ServerXMLHTTP to programmatically make the post. I've tried several configurations on the IIS 5.1 site, but there's no way I can make IIS run with a specified account. I made a little ASP page that runs whoami to verify which account the IIS process is using. With IIS 5.1, using integrated security, the process runs as my_machine\IWAM_my_machine. If I disable integrated security and leave a domain account for anonymous access, I get the same result (?). To test the user I do the following:
    private function whoami()
        dim shell, cmd
        set shell = createObject("wscript.shell")
        set cmd = shell.exec( server.mapPath( "whoami.exe" ) )
        whoami = cmd.stdOut.readAll()
        set shell = nothing: set cmd = nothing
    end function
    Is it because I'm issuing a shell command? I'd like to make HTTP POST calls to another site that works with integrated security. So I need some way to pass the credentials, or at least to run with a specified account and then configure the remote site to trust that account. I thought that just setting the site to work with integrated security would be enough. How can I achieve such a thing? PS: with IIS 6 the same happens, but if I change the pool configuration I get the following from whoami: NT AUTHORITY\NETWORK SERVICE, NT AUTHORITY\LOCAL SERVICE, or NT AUTHORITY\SYSTEM. If I set a domain account, I get a "service unavailable" message. Edit: found this http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/275269ee-1b9f-4869-8d72-c9006b5bd659.mspx?mfr=true - it says what I supposed: "If an authenticated user makes a request, the thread token is based on the authenticated account of the user", but somehow it doesn't seem to work like that. What could I possibly be missing? Edit: well, the whoami thing is obviously fooling me. I tried with the following function and everything seemed to be working fine:
    private function whoami_db( serverName, dbName )
        dim conn, data
        set conn = server.createObject("adodb.connection")
        conn.open "Provider=SQLOLEDB.1;Integrated Security=SSPI;" & _
            "Initial Catalog=" & dbName & ";Data Source=" & serverName
        set data = conn.execute( "select suser_sname() as user_name" )
        whoami_db = data("user_name")
        data.close: conn.close
        set data = nothing: set conn = nothing
    end function
    But how can I make msxml2.ServerXMLHTTP work with the user credentials?

    Read the article

  • jQuery hasClass conditional not working

    - by Wade D Ouellet
    Hey, I have my site going here: http://www.treethink.com I am trying to make it so when a navigation item is clicked, it retracts the news ticker on the right and then doesn't run any of the extracting/timer functions anymore. I got it to retract but it still extracts. How I am doing it is I am adding a "noTicker" class with the nav buttons and removing it with colorbox's close button. The function runs on the page initially and if there isn't a "noTicker" class it runs the news ticker as planned. When a nav or close button is clicked it runs the function again which checks again to see if it has that class. So if it has the class it should retract (which it is so that must mean it's adding the class properly) and then not run any of the timer functions, but it still runs the timer functions for some reason. Here is my jQuery. /* Initially hide all news items */ $('#ticker1').hide(); $('#ticker2').hide(); $('#ticker3').hide(); var randomNum = Math.floor(Math.random()*3); /* Pick random number */ newsTicker(); function newsTicker() { if (!$("#ticker").hasClass("noTicker")) { $("#ticker").oneTime(2000,function(i) { /* Do the first pull out once */ $('div#ticker div:eq(' + randomNum + ')').show(); /* Select div with random number */ $("#ticker").animate({right: "0"}, {duration: 800 }); /* Pull out ticker with random div */ }); $("#ticker").oneTime(15000,function(i) { /* Do the first retract once */ $("#ticker").animate({right: "-450"}, {duration: 800}); /* Retract ticker */ $("#ticker").oneTime(1000,function(i) { /* Afterwards */ $('div#ticker div:eq(' + (randomNum) + ')').hide(); /* Hide that div */ }); }); $("#ticker").everyTime(16500,function(i) { /* Everytime timer gets to certain point */ /* Show next div */ randomNum = (randomNum+1)%3; $('div#ticker div:eq(' + (randomNum) + ')').show(); $("#ticker").animate({right: "0"}, {duration: 800}); /* Pull out right away */ $("#ticker").oneTime(15000,function(i) { /* Afterwards */ $("#ticker").animate({right: "-450"}, {duration: 800});/* Retract ticker */ }); $("#ticker").oneTime(16000,function(i) { /* Afterwards */ /* Hide all divs */ $('#ticker1').hide(); $('#ticker2').hide(); $('#ticker3').hide(); }); }); } else { $("#ticker").animate({right: "-450"}, {duration: 800}); /* Retract ticker */ $("#ticker").oneTime(1000,function(i) { /* Afterwards */ $('div#ticker div:eq(' + (randomNum) + ')').hide(); /* Hide that div */ }); } } /* when nav item is clicked re-run news ticker function but give it new class to prevent activity */ $("#nav li").click(function() { $("#ticker").addClass("noTicker"); newsTicker(); }); /* when close button is clicked re-run news ticker function but take away new class so activity can start again */ $("#cboxClose").click(function() { $("#ticker").removeClass("noTicker"); newsTicker(); }); Thanks, Wade

    Read the article

  • How to implement login page using Spring Security so that it works with Spring web flow?

    - by simon
    I have a web application using Spring 2.5.6 and Spring Security 2.0.4. I have implemented a working login page, which authenticates the user against a web service. The authentication is done by defining a custom authentincation manager, like this: <beans:bean id="customizedFormLoginFilter" class="org.springframework.security.ui.webapp.AuthenticationProcessingFilter"> <custom-filter position="AUTHENTICATION_PROCESSING_FILTER" /> <beans:property name="defaultTargetUrl" value="/index.do" /> <beans:property name="authenticationFailureUrl" value="/login.do?error=true" /> <beans:property name="authenticationManager" ref="customAuthenticationManager" /> <beans:property name="allowSessionCreation" value="true" /> </beans:bean> <beans:bean id="customAuthenticationManager" class="com.sevenp.mobile.samplemgmt.web.security.CustomAuthenticationManager"> <beans:property name="authenticateUrlWs" value="${WS_ENDPOINT_ADDRESS}" /> </beans:bean> The authentication manager class: public class CustomAuthenticationManager implements AuthenticationManager, ApplicationContextAware { @Transactional @Override public Authentication authenticate(Authentication authentication) throws AuthenticationException { //authentication logic return new UsernamePasswordAuthenticationToken(principal, authentication.getCredentials(), grantedAuthorityArray); } The essential part of the login jsp looks like this: <c:url value="/j_spring_security_check" var="formUrlSecurityCheck"/> <form method="post" action="${formUrlSecurityCheck}"> <div id="errorArea" class="errorBox"> <c:if test="${not empty param.error}"> ${sessionScope["SPRING_SECURITY_LAST_EXCEPTION"].message} </c:if> </div> <label for="loginName"> Username: <input style="width:125px;" tabindex="1" id="login" name="j_username" /> </label> <label for="password"> Password: <input style="width:125px;" tabindex="2" id="password" name="j_password" type="password" /> </label> <input type="submit" tabindex="3" name="login" class="formButton" value="Login" /> </form> Now the problem is that the application should use Spring Web Flow. After the application was configured to use Spring Web Flow, the login does not work anymore - the form action to "/j_spring_security_check" results in a blank page without error message. What is the best way to adapt the existing login process so that it works with Spring Web Flow?
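    One way to narrow this down is to confirm whether the filter chain ever authenticates the user before the flow takes over: the submit to /j_spring_security_check has to be handled by the security filter, not routed into Web Flow's servlet mapping. A small helper bean along these lines can be called from a flow action or controller to check the outcome (the class name is hypothetical; the package names are the Spring Security 2.x ones):
    import org.springframework.security.Authentication;
    import org.springframework.security.context.SecurityContextHolder;

    public class LoginStatusChecker {

        /** Returns true once the security filter chain has populated the context. */
        public boolean isAuthenticated() {
            Authentication auth = SecurityContextHolder.getContext().getAuthentication();
            return auth != null && auth.isAuthenticated();
        }
    }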

    Read the article

  • How to Symbolicate iPhone App Crash Reports?

    - by bluej3
    Hello. I retrieved the crash reports from iTunes Connect and followed this guide: http://webcache.googleusercontent.com/search?q=cache:MmxwdXObZLMJ:www.anoshkin.net/blog/2008/09/09/iphone-crash-logs/+iphone+crash+debig&cd=2&hl=en&ct=clnk I tried:
    $ symbolicatecrash report.crash MobileLines.app.dSYM report-with-symbols.crash
    Error in symbol file for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/3.1.2 (7D11)/Symbols/System/Library/Frameworks/IOKit.framework/Versions/A/IOKit
    Error in symbol file for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/3.1.2 (7D11)/Symbols/System/Library/PrivateFrameworks/WebCore.framework/WebCore
    Error in symbol file for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/3.1.2 (7D11)/Symbols/System/Library/Frameworks/Foundation.framework/Foundation
    Error in symbol file for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/3.1.2 (7D11)/Symbols/usr/lib/libSystem.B.dylib
    Error in symbol file for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/3.1.2 (7D11)/Symbols/System/Library/PrivateFrameworks/GraphicsServices.framework/GraphicsServices
    Error in symbol file for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/3.1.2 (7D11)/Symbols/System/Library/Frameworks/UIKit.framework/UIKit
    Error in symbol file for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/3.1.2 (7D11)/Symbols/System/Library/Frameworks/OpenGLES.framework/MBXGLEngine.bundle/MBXGLEngine
    Error in symbol file for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/3.1.2 (7D11)/Symbols/System/Library/Frameworks/AudioToolbox.framework/AudioToolbox
    Error in symbol file for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/3.1.2 (7D11)/Symbols/System/Library/Frameworks/CoreFoundation.framework/CoreFoundation
    But it didn't work (I still get the error messages above). Notes: the build output is located in "build/Distribution-iphones", and the "MYGAME.app" file and the "MYGAME.app.dSYM" file are located in the same directory. How can I solve this problem? Please help me :) * Crash log (crash at thread 2) Incident Identifier: 95230C2E-CD83-46BF-8DAE-F38BCD46B910 Process: MYGAMELite [303] Path: /var/mobile/Applications/4FB79BEC-2BF0-438B-82A8-C302CD52A85C/MYGAMELite.app/MYGAMELite Identifier: MYGAMELite Version: ??? (???)
Code Type: ARM (Native) Parent Process: launchd [1] Date/Time: 2010-06-03 11:43:52.875 +0800 OS Version: iPhone OS 3.1.2 (7D11) Report Version: 104 Exception Type: EXC_BAD_ACCESS (SIGSEGV) Exception Codes: KERN_INVALID_ADDRESS at 0x03e3a002 Crashed Thread: 2 Thread 2 Crashed: 0 AudioToolbox 0x330d708c AU3DMixerEmbedded::SumInput16(unsigned long, AudioBufferList const&, AudioBufferList const&, unsigned long, float, unsigned long) 1 AudioToolbox 0x330d89a0 AU3DMixerEmbedded::Render(unsigned long&, AudioTimeStamp const&, unsigned long) 2 AudioToolbox 0x32fe6bb8 AUBase::DoRender(unsigned long&, AudioTimeStamp const&, unsigned long, unsigned long, AudioBufferList&) 3 AudioToolbox 0x32fe6504 Render 4 AudioToolbox 0x330160b8 AUInputElement::PullInput(unsigned long&, AudioTimeStamp const&, unsigned long, unsigned long) 5 AudioToolbox 0x33023fa8 AUInputFormatConverter2::InputProc(OpaqueAudioConverter*, unsigned long*, AudioBufferList*, AudioStreamPacketDescription*, void) 6 AudioToolbox 0x32fe4b60 AudioConverterChain::CallInputProc(unsigned long) 7 AudioToolbox 0x32fe4a5c AudioConverterChain::FillBufferFromInputProc(unsigned long*, CABufferList*) 8 AudioToolbox 0x32fe4790 BufferedAudioConverter::GetInputBytes(unsigned long, unsigned long&, CABufferList const*&) 9 AudioToolbox 0x33023e30 CBRConverter::RenderOutput(CABufferList*, unsigned long, unsigned long&, AudioStreamPacketDescription*) 10 AudioToolbox 0x32fe4284 BufferedAudioConverter::FillBuffer(unsigned long&, AudioBufferList&, AudioStreamPacketDescription*) 11 AudioToolbox 0x32fe44a4 AudioConverterChain::RenderOutput(CABufferList*, unsigned long, unsigned long&, AudioStreamPacketDescription*) 12 AudioToolbox 0x32fe4284 BufferedAudioConverter::FillBuffer(unsigned long&, AudioBufferList&, AudioStreamPacketDescription*) 13 AudioToolbox 0x32fe3f10 AudioConverterFillComplexBuffer 14 AudioToolbox 0x33023844 AUConverterBase::RenderBus(unsigned long&, AudioTimeStamp const&, unsigned long, unsigned long) 15 AudioToolbox 0x330ce928 AURemoteIO::RenderBus(unsigned long&, AudioTimeStamp const&, unsigned long, unsigned long) 16 AudioToolbox 0x32fe6bb8 AUBase::DoRender(unsigned long&, AudioTimeStamp const&, unsigned long, unsigned long, AudioBufferList&) 17 AudioToolbox 0x330cf308 AURemoteIO::PerformIO(int, unsigned int, unsigned int, AQTimeStamp const&, AQTimeStamp const&) 18 AudioToolbox 0x330cf4cc AURIOCallbackReceiver_PerformIOSync 19 AudioToolbox 0x330c76fc _XPerformIOSync 20 AudioToolbox 0x330181d8 mshMIGPerform 21 AudioToolbox 0x3309cec8 MSHMIGDispatchMessage 22 AudioToolbox 0x330d48d4 AURemoteIO::IOThread::Entry(void*) 23 AudioToolbox 0x32fc9f20 CAPThread::Entry(CAPThread*) 24 libSystem.B.dylib 0x30b5b7b0 _pthread_body

    Read the article

  • Integrate Google Wave With Your Windows Workflow

    - by Matthew Guay
    Have you given Google Wave a try, only to find it difficult to keep up with?  Here’s how you can integrate Google Wave with your desktop and workflow with some free and simple apps. Google Wave is an online web app, and unlike many Google services, it’s not easily integrated with standard desktop applications.  Instead, you’ll have to keep it open in a browser tab, and since it is one of the most intensive HTML5 webapps available today, you may notice slowdowns in many popular browsers.  Plus, it can be hard to stay on top of your Wave conversations and collaborations by just switching back and forth between the website and whatever else you’re working on.  Here we’ll look at some tools that can help you integrate Google Wave with your workflow, and make it feel more native in Windows. Use Google Wave Directly in Windows What’s one of the best ways to make a web app feel like a native application?  By making it into a native application, of course!  Waver is a free Air powered app that can make the mobile version of Google Wave feel at home on your Windows, Mac, or Linux desktop.  We found it to be a quick and easy way to keep on top of our waves and collaborate with our friends. To get started with Waver, open their homepage on the Adobe Air Marketplace (link below) and click Download From Publisher. Waver is powered by Adobe Air, so if you don’t have Adobe Air installed, you’ll need to first download and install it. After clicking the link above, Adobe Air will open a prompt asking what you wish to do with the file.  Click Open, and then install as normal. Once the installation is finished, enter your Google Account info in the window.   After a few moments, you’ll see your Wave account in miniature, running directly in Waver.  Click a Wave to view it, or click New wave to start a new Wave message.  Unfortunately, in our tests the search box didn’t seem to work, but everything else worked fine. Google Wave works great in Waver, though all of the Wave features are not available since it is running the mobile version of Wave. You can still view content from plugins, including YouTube videos, directly in Waver.   Get Wave Notifications From Your Windows Taskbar Most popular email and Twitter clients give you notifications from your system tray when new messages come in.  And with Google Wave Notifier, you can now get the same alerts when you receive a new Wave message. Head over to the Google Wave Notifier site (link below), and click the download link to get started.  Make sure to download the latest Binary zip, as this one will contain the Windows program rather than the source code. Unzip the folder, and then run GoogleWaveNotifier.exe. On first run, you can enter your Google Account information.  Notice that this is not a standard account login window; you’ll need to enter your email address in the Username field, and then your password below it. You can also change other settings from this dialog, including update frequency and whether or not to run at startup.  Click the value, and then select the setting you want from the dropdown menu. Now, you’ll have a new Wave icon in your system tray.  When it detects new Waves or unread updates, it will display a popup notification with details about the unread Waves.  Additionally, the icon will change to show the number of unread Waves.  Click the popup to open Wave in your browser.  Or, if you have Waver installed, simply open the Waver window to view your latest Waves. 
    If you ever need to change settings again in the future, right-click the icon and select Settings, and then edit as above. Get Wave Notifications in Your Email  Most of us have Outlook or Gmail open all day, and seldom leave the house without a Smartphone with push email.  And thanks to a new Wave feature, you can still keep up with your Waves without having to change your workflow. To activate email notifications from Google Wave, login to your Wave account, click the arrow beside your Inbox, and select Notifications. Select how quickly you want to receive notifications, and choose which email address you wish to receive the notifications.  Click Save when you’re finished. Now you’ll receive an email with information about new and updated Waves in your account.  If there were only small changes, you may get enough info directly in the email; otherwise, you can click the link and open that Wave in your browser. Conclusion Google Wave has great potential as a collaboration and communications platform, but by default it can be hard to keep up with what’s going on in your Waves.  These apps for Windows help you integrate Wave with your workflow, and can keep you from constantly logging in and checking for new Waves.  And since Google Wave registration is now open for everyone, it’s a great time to give it a try and see how it works for yourself. Links Signup for Google Wave (Google Account required) Download Waver from the Adobe Air Marketplace Download Google Wave Notifier

    Read the article

  • PHP Doctrine frustration: loading models doesn't work..?

    - by ropstah
    I'm almost losing it, i really hope someone can help me out! I'm using Doctrine with CodeIgniter. Everything is setup correctly and works until I generate the classes and view the website. Fatal error: Class 'BaseObjecten' not found in /var/www/vhosts/domain.com/application/models/Objecten.php on line 13 I'm using the following bootstrapper (as CodeIgniter plugin): <?php // system/application/plugins/doctrine_pi.php // load Doctrine library require_once BASEPATH . '/plugins/Doctrine/lib/Doctrine.php'; // load database configuration from CodeIgniter require_once APPPATH.'/config/database.php'; // this will allow Doctrine to load Model classes automatically spl_autoload_register(array('Doctrine', 'autoload')); // we load our database connections into Doctrine_Manager // this loop allows us to use multiple connections later on foreach ($db as $connection_name => $db_values) { // first we must convert to dsn format $dsn = $db[$connection_name]['dbdriver'] . '://' . $db[$connection_name]['username'] . ':' . $db[$connection_name]['password']. '@' . $db[$connection_name]['hostname'] . '/' . $db[$connection_name]['database']; Doctrine_Manager::connection($dsn,$connection_name); } // CodeIgniter's Model class needs to be loaded require_once BASEPATH.'/libraries/Model.php'; // telling Doctrine where our models are located Doctrine::loadModels(APPPATH.'/models'); // (OPTIONAL) CONFIGURATION BELOW // this will allow us to use "mutators" Doctrine_Manager::getInstance()->setAttribute( Doctrine::ATTR_AUTO_ACCESSOR_OVERRIDE, true); // this sets all table columns to notnull and unsigned (for ints) by default Doctrine_Manager::getInstance()->setAttribute( Doctrine::ATTR_DEFAULT_COLUMN_OPTIONS, array('notnull' => true, 'unsigned' => true)); // set the default primary key to be named 'id', integer, 4 bytes Doctrine_Manager::getInstance()->setAttribute( Doctrine::ATTR_DEFAULT_IDENTIFIER_OPTIONS, array('name' => 'id', 'type' => 'integer', 'length' => 4)); ?> Anyone? p.s. I also tried adding the following right after // (OPTIONAL CONFIGURATION) Doctrine_Manager::getInstance()->setAttribute(Doctrine::ATTR_MODEL_LOADING, Doctrine::MODEL_LOADING_CONSERVATIVE); spl_autoload_register(array('Doctrine', 'modelsAutoload'));

    Read the article

  • Using Open MQ as an Oracle CEP Event Source

    - by seth.white
    I helped an Oracle CEP customer recently who wanted to use Open MQ has an event source for their Oracle CEP application.  In this case, the Oracle CEP application was being used to provide monitoring for an electronic commerce website, however, the steps for configuring Open MQ are entirely independent of the application logic. I thought I would list the configuration steps in a blog post in case they might help others in the future. Note that although the Oracle CEP documentation states that only WebLogic and Tibco JMS are "officially" supported, any JMS implementation that provides a Java client should work with Oracle CEP. The first step is to add an adapter to the application's EPN. This can be done in the usual way, using the Eclipse IDE. The end result is something like the following bit of configuration in the application's Spring application context. Note that the provider attribute value of 'jms-inbound' specifies that the out-of-the-box JMS adapter is being used. <wlevs:adapter id="helloworldAdapter" provider="jms-inbound"> </wlevs:adapter>   Next, configure the inbound adapter so that it can connect to Open MQ in the Oracle CEP configuration file (config.xml). The snippet below provides an example of what this configuration should look like. The exact values specified for jndi-provider-url, jndi-factory, connection-jndi-name, destination-jndi-name elements will depend on your Open MQ configuration.  For example , if the name of your Open MQ topic destination is 'ElectronicCommerceTopic', then you would specify that as the destination-jndi-name.  The name of your Open MQ connection factory goes in the connection-jndi-name element. In my simple example, I also specify in event-type element so that the out-of-the-box JMS adapter will attempt to automatically convert incoming messages to events of type HelloWorldEvent. In a more complex application, one would configure a custom converter on the JMS adapter to convert from messages to events.  The Oracle CEP 11.1.3 documentation describes how to do this.   <jms-adapter> <name>helloworldAdapter</name> <event-type>HelloWorldEvent</event-type> <jndi-provider-url>file:///C:/Temp</jndi-provider-url> <jndi-factory>com.sun.jndi.fscontext.RefFSContextFactory</jndi-factory> <connection-jndi-name>YourJMSConnectionFactoryName</connection-jndi-name> <destination-jndi-name>YourJMSDestinationName</destination-jndi-name> </jms-adapter>   Finally, one needs to package the client-side Open MQ jars so that the classes that they contain are available to the Oracle CEP runtime. The recommended way for doing this in the Oracle CEP 11.1.3 release is to package the classes as a library module or simply place them in the application bundle.  The advantage of deploying the classes as a library module is that they are available to any application that wants to connect to Open MQ. In my case, I packaged the classes in my application bundle. A best practice when you want to include additional jars in your application bundle is to create a 'lib' directory in your Eclipse project and then copy the required jars into that directory.  Then, use the support that Eclipse provides to add the jars to the bundle classpath (which makes the classes part of your application in the same way that regular application classes are), and export all of the classes from your application bundle so that they are available to the Oracle CEP server runtime.  The screenshot below Illustrates how this is done in Eclipse.  
    The bundle classpath contains two Open MQ jars and all packages in the jars are exported. Finally, import the javax.jms and javax.naming packages into the application module as these are needed by the Open MQ classes. The screenshot below shows the complete list of package imports for my sample application. Once you have completed these steps, you should be able to build and deploy your application and begin receiving inbound messages from Open MQ. Technorati Tags: CEP,JMS,Adapter,Open MQ,Eclipse
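    If the adapter does not appear to receive anything, it can help to take Oracle CEP out of the picture and publish a test message straight to Open MQ with a small standalone JMS client. The sketch below uses only the standard javax.jms and javax.naming APIs and the same file-system JNDI settings shown in the adapter configuration above; the connection factory and destination names are the same placeholders, so substitute your own:
    import java.util.Hashtable;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Destination;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    public class OpenMqSmokeTest {
        public static void main(String[] args) throws Exception {
            // Same JNDI provider settings as the jms-adapter element above.
            Hashtable<String, String> env = new Hashtable<String, String>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.fscontext.RefFSContextFactory");
            env.put(Context.PROVIDER_URL, "file:///C:/Temp");
            Context ctx = new InitialContext(env);

            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("YourJMSConnectionFactoryName");
            Destination topic = (Destination) ctx.lookup("YourJMSDestinationName");

            Connection conn = cf.createConnection();
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(topic);

            // The CEP jms-adapter should pick this up and convert it into an event.
            TextMessage msg = session.createTextMessage("hello from Open MQ");
            producer.send(msg);
            conn.close();
        }
    }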

    Read the article

  • WebBrowser Control in ATL window. How to free up memory on window unload? I'm stuck.

    - by Martin
    Hello there. I have a Win32 C++ application. The _tWinMain(...) method has a GetMessage(...) while loop at the end. Before GetMessage(...) I create the main window with:
    HWND m_MainHwnd = CreateWindowExW(WS_EX_TOOLWINDOW | WS_EX_LAYERED, CAxWindow::GetWndClassName(), _TEXT("http://www.-website-.com"), WS_POPUP, 0, 0, 1024, 768, NULL, NULL, m_Instance, NULL);
    ShowWindow(m_MainHwnd);
    If I do not create the main window, my application needs about 150K of memory. But when I create the main window with the WebBrowser control inside, the memory usage increases to 8500K. Now I want to dynamically unload the main window while _tWinMain(...) keeps running. I'm unloading with DestroyWindow(m_MainHwnd), but the WebBrowser control won't unload and free up its memory - the application's memory usage is still 8500K! I can also get the WebBrowser instance, or with some additional code the WebBrowser HWND:
    IWebBrowser2* m_pWebBrowser2;
    CAxWindow wnd = (CAxWindow)m_MainHwnd;
    HRESULT hRet = wnd.QueryControl(IID_IWebBrowser2, (void**)&m_pWebBrowser2);
    So I tried to free up the memory used by the main window and WebBrowser control with (let's say it's experimental):
    if(m_pWebBrowser2) m_pWebBrowser2->Release();
    DestroyWindow(m_hwndWebBrowser); //<-- just analogous
    OleUninitialize();
    No success at all. I also created a wrapper class which creates the main window, created a pointer to it, and freed it up with delete:
    Wrapper* wrapper = new Wrapper(); //wrapper creates the main window inside and shows it
    //...do some stuff
    delete(wrapper);
    No success. Still 8500K. So please, how can I get rid of the main window and its WebBrowser control and free up the memory, returning to about 150K? Later I will recreate the window. It's a dynamic load and unload of the main window, depending on other commands. Thanks! Regards, Martin

    Read the article

  • Problem with JSON, Struts 2 and Ajax.

    - by Javi
    Hello, I have an application with Struts 2 and Spring, and I want to send JSON as the response to an Ajax request, but my server is not sending the JSON in the response. I have this base package: <package name="myApp-default" extends="struts-default"> <result-types> <result-type name="tiles" class="org.apache.struts2.views.tiles.TilesResult" /> <result-type name="json" class="com.googlecode.jsonplugin.JSONResult"> <param name="enableGZIP">true</param> <param name="noCache">true</param> <param name="defaultEncoding">ISO-8859-1</param> </result-type> </result-types> </package> And another package which extends the previous one: <package namespace="/rest" name="rest" extends="myApp-default"> <action name="login" class="restAction" method="login"> <result type="json"/> </action> So I call it with jQuery Ajax, and while debugging I see that it enters the restAction action's login() method and also its getJsonData() method (I have set two breakpoints, and the program stops first in login and then in getJsonData). public class RestAction extends BaseAction { private String login; private String password; private String jsonData; public String login() { jsonData = "changed"; return Action.SUCCESS; } //I omit getters/setters for login and password @JSON(name="jsonData") public String getJsonData() { return jsonData; } public void setJsonData(String jsonData) { this.jsonData = jsonData; } } My Ajax call looks like this: $.ajax({type: 'GET', url: this_url, data: pars, cache:false, success: handleResponse, error: handleError}); With Firebug I see that the response to my request is completely empty (though I have verified that the data in the pars variable reaches the action and the methods execute correctly). Maybe for that reason my Ajax call enters my handleError function, where xmlHttpRequest.readyState is 4 and xmlHttpRequest.status is 0. Can anyone tell me what the problem may be? Thanks.
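    If the action runs but the body that reaches the browser is empty, it can be useful to check what the json result should have produced, independently of the HTTP layer. Assuming the jsonplugin version referenced in the result-type above exposes its JSONUtil helper, a tiny sketch like this prints the expected serialization of an object's getters (the stand-in class is illustrative only):
    import com.googlecode.jsonplugin.JSONUtil;

    public class SerializationCheck {

        // Stand-in exposing the same getter the real action serializes.
        public static class Payload {
            public String getJsonData() { return "changed"; }
        }

        public static void main(String[] args) throws Exception {
            // Should print something like {"jsonData":"changed"} - roughly the
            // body the "json" result is expected to write to the response.
            System.out.println(JSONUtil.serialize(new Payload()));
        }
    }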

    Read the article

  • NVIDIA X server - "sudo nvidia-xconfig" does not generate a working 'xorg.conf'

    - by Mike
    I am over 18 hours deep on this challenge. I got to this point and am stuck. very stuck. Maybe you can figure it out? Ubuntu Version 12.04 LTS with all the updates installed. Problem: The default settings in "etc/X11/xorg.conf" that are generated by the "nvidia-xconfig" tool, do not allow the NVIDIA x server to connect to the driver in my "System Settings Additional Driver window". (that's how I understand it. Lots of information below). Symptoms of Problem "System Settings Additional Driver" window has drivers, but the nvidia x server cannot connect/utilize any of the 4 drivers. the drivers are activated, but not in use. When I go to "System Tools Administration NVIDIA x server settings" I get an error that basically tells me to create a default file to initialize the NVIDIA X server (screen shot below). This is the messages the terminal gives after running a "sudo nvidia-xconfig" command for the first time. It seems that the generated file by the tool i just ran is generating a bad/unusable file: If I run the "sudo nvidia-xconfig" command again, I wont get an error the second time. However when I reboot, the default file that is generated (etc/X11/xorg.conf) simply puts the screen resolution at 800 x 600 (or something big like that). When I try to go to NVIDIA x server settings I am greeted with the same screen as the screen shot as in symptom 2 (no option to change the resolution). If I try to go to "system settings display" there are no other resolutions to choose from. At this point I must delete the newly minted "xorg.conf" and reinstate the original in its place. Here are the contents of the "xorg.conf" that is generated first (the one missing required information): # nvidia-xconfig: X configuration file generated by nvidia-xconfig # nvidia-xconfig: version 304.88 (buildmeister@swio-display-x86-rhel47-06) Wed Mar 27 15:32:58 PDT 2013 Section "ServerLayout" Identifier "Layout0" Screen 0 "Screen0" InputDevice "Keyboard0" "CoreKeyboard" InputDevice "Mouse0" "CorePointer" EndSection Section "Files" EndSection Section "InputDevice" # generated from default Identifier "Mouse0" Driver "mouse" Option "Protocol" "auto" Option "Device" "/dev/psaux" Option "Emulate3Buttons" "no" Option "ZAxisMapping" "4 5" EndSection Section "InputDevice" # generated from default Identifier "Keyboard0" Driver "kbd" EndSection Section "Monitor" Identifier "Monitor0" VendorName "Unknown" ModelName "Unknown" HorizSync 28.0 - 33.0 VertRefresh 43.0 - 72.0 Option "DPMS" EndSection Section "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" EndSection Section "Screen" Identifier "Screen0" Device "Device0" Monitor "Monitor0" DefaultDepth 24 SubSection "Display" Depth 24 EndSubSection EndSection Hardware: I ran the "lspci|grep VGA". There results are: 00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) 01:00.0 VGA compatible controller: NVIDIA Corporation GF108 [Quadro 1000M] (rev a1) More Hardware info: Ram: 16GB CPU: Intel Core i7-2720QM @2.2GHz * 8 Other: 64 bit. This is a triple boot computer and not a VM. Attempts With Not Success on My End: 1) Tried to append the "xorg.conf" with what I perceive is missing information and obviously it didn't fly. 2) All the other stuff I tried got me to this point. 
3) See if this link is helpful to you (I barely get it, but i get enough knowing that a smarter person might find this useful): http://manpages.ubuntu.com/manpages/lucid/man1/nvidia-xconfig.1.html 4) I am completely new to Linux (40 hours over past week), but not to programming. However I am very serious about changing over to Linux. When you respond (I hope someone responds...) please respond in a way that a person new to Linux can understand. 5) By the way, the reason I am in this mess is because I MUST have a second monitor running from my laptop, and "System Settings Display" doesn't recognize my second display. I know it is possible to make the second display work in my system, because when I boot from the install CD, I perform work on the native laptop monitor, but the second monitor shows a purple screen with Ubuntu in the middle, so I know the VGA port is sending a signal out. If this is too much for you to tackle please suggest an alternative method to get a second display. I don't want to go to windows but I cannot have a single display. I am really fudged here. I hope some smart person can help. Thanks in advance. Mike. **********************EDIT #1********************** More Details About Graphics Card I was asked "which brand of nvidia-card do you have exactly?" Here is what I did to provide more info (maybe relevant, maybe not, but here is everything): 1) Took my Lenovo W520 right apart to see if there is an identifier on the actual card. However I realized that if I get deep enough to take a look, the laptop "won't like it". so I put it back together. Figuring out the card this way is not an option for me right now. 2) (My computer is triple boot) I logged into Win7 and ran 'dxdiag' command. here is the screen shot: 3) I tried to look on the lenovo website for more details... but no luck. I took a look at my receipts and here is info form receipt: System Unit: W520 NVIDIA Quadro 1000M 2GB 4) In win7 I went to the NVIDIA website and used the option to have my card 'scanned' by a Java applet to determine the latest update for my card. I tried the same with Ubuntu but I can't get the applet to run. Here is the recommended driver from from the NVIDIA Applet for my card for Win7 (I hope this shines some light on the specifics of the card): Quadro/NVS/Tesla/GRID Desktop Driver Release R319 Version: 320.00 WHQL Release Date: 3.5.2013 5) Also I went on the NVIDIA driver search and looked through every possible combination of product type + product series + product to find all the combinations that yield a 1000M card. My card is: Product Type: Quadro Product Series: Quadro Series (Notebooks) Product: 1000M ***********************EDIT #2******************* Additional Symptoms Another question that generated more symptoms I previously didn't mention was: "After generating xorg.conf by nvidia-xconfig, go to additional drivers, do you see nvidia-304?" 1) I took a screen shot of the "additional drivers" right after generating xorg.conf by nvidia-xconfig. Here it is: 2) Then I did a reboot. Now Ubuntu is 600 x 800 resolution. When I logged in after the computer came up I got an error (which I always get after generating xorg.conf by nvidia-xconfig and rebooting) 3) To finally answer the question - No. There is no "NVIDIA-304" driver. Screen shot of additional drivers after generating xorg.conf by nvidia-xconfig and rebooting : At this point I revert to the original xorg.conf and delete the xorg.conf generated by Nvidia.

    Read the article

  • How to detect page zoom level in all modern browsers?

    - by understack
    How can I detect the page zoom level in all modern browsers? While this thread tells how to do it in IE7 and IE8, I can't find a good solution for FF, Safari and Chrome. For FF, one of the suggested solutions relies on FF storing the page zoom level for future use. So would I be able to get the zoom level on first page load? Somewhere I read it only works if you change the zoom level on that page after it's loaded. Is there a way to trap a 'zoom' event? I need this because some of my calculations are based on the number of pixels and they change on zoom. Thanks. Modified sample given by tfl. This would alert a different height based on the zoom level. <html> <head> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.1/jquery.min.js" type="text/javascript"/></script> <title></title> </head> <body> <div id="xy" style="border:1px solid #f00; width:100px;"> Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque sollicitudin tortor in lacus tincidunt volutpat. Integer dignissim imperdiet mollis. Suspendisse quis tortor velit, placerat tempor neque. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Praesent bibendum auctor lorem vitae tempor. Nullam condimentum aliquam elementum. Nullam egestas gravida elementum. Maecenas mattis molestie nisl sit amet vehicula. Donec semper tristique blandit. Vestibulum adipiscing placerat mollis. </div> <div> <button onclick="alert($('#xy').height());">Show</button> </div> </body> </html>

    Read the article

  • Would Apache running on port 8080 prevent dynamically loaded scripts in JavaScript?

    - by editor
    Had a nice PHP/HTML/JS prototype working on my personal Linode, then tried to throw it onto a work machine. The page adds a script tag dynamically with some JavaScript. It's a bunch of Google charts that update based on different timeslices. That code looks something like this: // jQuery $.post to send the beginning and end timestamps $.post("channel_functions.php", data_to_post, function(data){ // the data that's returned is the javascript I want to load var script = document.createElement('script'); var head= document.getElementsByTagName('head')[0]; var script= document.createElement('script'); var text = document.createTextNode(data); script.type= 'text/javascript'; script.id = 'chart_data'; script.appendChild(text); // Adding script tag to page head.appendChild(script); // Call the functions I know are present in the script tag loadTheCharts(); }); function loadTheCharts() { // These are the functions that were loaded dynamically // By this point the script tag is supposed to be loaded, added and eval'd function1(); function2(); } function1() and function2() don't exist until they get added to the DOM, but I don't call loadTheCharts() until after the $.post has run, so this doesn't seem to be a problem. I'm one of those dirty PHP coders your mother warned you about, so I'm not well versed in JavaScript beyond what I've read in the typical go-to O'Reilly books. But this code worked fine on my personal dev server, so I'm wondering why it wouldn't work on this new machine. The only difference in setup, from what I can tell, is that the new machine is running on port 8080, so it's 192.168.blah.blah:8080/index.php instead of nicedomain.com/index.php. I see the code was indeed added to the DOM when I use webmaster tools to "view generated source", but in Firebug I get an error like "function2() is undefined", even though my understanding was that all script tags are eval'ed when added to the DOM. My question: Given what I've laid out, and that the machine is running on :8080, is there a reason anyone can think of as to why a dynamically loaded function like function2() would be defined on the Linode and not on the machine running Apache on 8080?
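    A quick diagnostic sketch (not from the original post; the endpoint and function names are the asker's). The port itself should not matter for an injected inline script; more often the POST response on the new machine is not the expected JavaScript (for example, a PHP notice is prepended), so the injected script fails to parse and its functions never get defined:

    $.post("channel_functions.php", data_to_post, function (data) {
        console.log(data);                      // inspect exactly what came back
        var script = document.createElement('script');
        script.type = 'text/javascript';
        script.text = data;                     // setting .text is equivalent to appending a text node
        document.getElementsByTagName('head')[0].appendChild(script);
        if (typeof function2 === 'function') {  // guard before calling the injected functions
            loadTheCharts();
        } else {
            console.log('Injected script did not define function2 - check the response logged above.');
        }
    });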

    Read the article

  • Fragment not showing up and breaks other buttons on Layout

    - by Devin Crane
    I'm learning how to use Fragments, and trying to add a fragment tag_button.xml at runtime to a FrameLayout tagFragmentContainer deep within another layout, deepLayout.xml. I get no errors, but the fragment doesn't show up. When I make its container visible, I can see a small sliver of layout between the other elements already existing, but then it disappears after onCreateView(), and all the remaining buttons are broken, which is even more confusing to me. tag_button.xml is just a regular layout file with some text and a button. tagFragmentContainer, within a LinearLayout in the middle of a large layout file, deepLayout.xml: <LinearLayout android:id="@+id/tagButtonsLayout" android:layout_marginBottom="10dip" android:visibility="gone" style="@style/Form"> <FrameLayout android:id="@+id/tagFragmentContainer" android:layout_marginBottom="10dip" style="@style/Form"> </FrameLayout> </LinearLayout> In deepLayoutActivity, I make visible tagButtonsLayout and start TagButtonActivity: final LinearLayout anotherlm = (LinearLayout) findViewById(R.id.tagButtonsLayout); anotherlm.setVisibility(View.VISIBLE); Intent i = new Intent (this, TagButtonActivity.class); startActivity(i); TagButtonActivity is as follows: public class TagButtonActivity extends FragmentActivity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.action_expense); if (savedInstanceState != null) return; TagButtonFragment firstFragment = new TagButtonFragment(); getSupportFragmentManager().beginTransaction().add(R.id.tagFragmentContainer, firstFragment, "tagOne").commit(); } } TagButtonFragment: public class TagButtonFragment extends Fragment { @Override public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { // Inflate the layout for this fragment return inflater.inflate(R.layout.tag_button, container, false); } } tag_button.xml: <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" style="@style/Form"> <TextView android:id="@+id/tag_button_header" style="@style/FieldHeader" android:text="example text"/> <RelativeLayout android:id="@+id/tag_button_block" android:layout_width="match_parent" android:layout_marginLeft="2dip" android:layout_marginTop="3dip" android:layout_marginBottom="3dip" android:layout_marginRight="2dip" android:layout_height="43dip" android:clickable="true" android:background="@drawable/row_spinner_selector"> <ImageView android:id="@+id/question_arrow" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_marginLeft="10dip" android:layout_marginRight="10dip" android:layout_alignParentRight="true" android:layout_centerVertical="true" android:src="@drawable/arrow_right"/> <TextView android:id="@+id/tag_question_text" android:layout_toLeftOf="@id/question_arrow" android:layout_width="fill_parent" android:layout_height="wrap_content" android:layout_marginLeft="10dip" android:layout_centerVertical="true" android:textColor="@android:color/black" android:singleLine="true" android:ellipsize="end" android:textSize="15sp" android:text="@string/not_selected"/> </RelativeLayout> </LinearLayout> I would certainly appreciate any help in figuring this out from someone who knows fragments better than me! Thanks!
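    Two things worth checking (a minimal sketch, not the asker's code): startActivity() launches a new screen, so the fragment ends up in TagButtonActivity's own window rather than in the layout the user is already looking at, and the layout passed to setContentView() must be the one that actually contains R.id.tagFragmentContainer, otherwise the add() has nowhere visible to attach the fragment. R.layout.deep_layout below is an assumed resource name standing in for deepLayout.xml:

    import android.os.Bundle;
    import android.support.v4.app.FragmentActivity;

    public class TagButtonActivity extends FragmentActivity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.deep_layout);   // the layout that holds tagFragmentContainer
            if (savedInstanceState == null) {
                getSupportFragmentManager()
                        .beginTransaction()
                        .add(R.id.tagFragmentContainer, new TagButtonFragment(), "tagOne")
                        .commit();
            }
        }
    }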

    Read the article

  • Rendering HTML in Java

    - by ferronrsmith
    I am trying to create a help panel for an application I am working on. The help file has already been created using HTML and I would like it to be rendered in a pane and shown. All the code I have seen shows how to render a site e.g. "http://google.com". I want to render a file from my PC e.g. "file://c:\tutorial.html" This is the code I have, but it doesn't seem to be working. import javax.swing.JEditorPane; import javax.swing.JFrame; import javax.swing.JLabel; import javax.swing.JScrollPane; import javax.swing.SwingUtilities; import java.awt.Color; import java.awt.Container; import java.io.IOException; import static java.lang.System.err; import static java.lang.System.out; final class TestHTMLRendering { // ------------------------------ CONSTANTS ------------------------------ /** * height of frame in pixels */ private static final int height = 1000; /** * width of frame in pixels */ private static final int width = 1000; private static final String RELEASE_DATE = "2007-10-04"; /** * title for frame */ private static final String TITLE_STRING = "HTML Rendering"; /** * URL of page we want to display */ private static final String URL = "file://C:\\print.html"; /** * program version */ private static final String VERSION_STRING = "1.0"; // --------------------------- main() method --------------------------- /** * Debugging harness for a JFrame * * @param args command line arguments are ignored. */ @SuppressWarnings( { "UnusedParameters" } ) public static void main( String args[] ) { // Invoke the run method on the Swing event dispatch thread // Sun now recommends you call ALL your GUI methods on the Swing // event thread, even the initial setup. // Could also use invokeAndWait and catch exceptions SwingUtilities.invokeLater( new Runnable() { /** * fire up a JFrame on the Swing thread */ public void run() { out.println( "Starting" ); final JFrame jframe = new JFrame( TITLE_STRING + " " + VERSION_STRING ); Container contentPane = jframe.getContentPane(); jframe.setSize( width, height ); contentPane.setBackground( Color.YELLOW ); contentPane.setForeground( Color.BLUE ); jframe.setDefaultCloseOperation( JFrame.EXIT_ON_CLOSE ); try { out.println( "acquiring URL" ); JEditorPane jep = new JEditorPane( URL ); out.println( "URL acquired" ); JScrollPane jsp = new JScrollPane( jep, JScrollPane.VERTICAL_SCROLLBAR_AS_NEEDED, JScrollPane.HORIZONTAL_SCROLLBAR_AS_NEEDED ); contentPane.add( jsp ); } catch ( IOException e ) { err.println( "can't find URL" ); contentPane.add( new JLabel( "can't find URL" ) ); } jframe.validate(); jframe.setVisible( true ); // Shows page, with HTML comments erroneously displayed. // The links are not clickable. } } ); }// end main }// end TestHTMLRendering
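    As a sketch of one likely fix (my suggestion, not from the post): "file://C:\print.html" is not a well-formed file URL, so it is safer to build the URL from a java.io.File; making the pane non-editable also typically stops the editing view in which HTML comments show, and lets hyperlink events fire (a HyperlinkListener is still needed to actually follow links):

    import java.io.File;
    import javax.swing.JEditorPane;

    public final class LocalPageSketch {
        public static void main(String[] args) throws Exception {
            // For brevity this skips the Swing-thread setup used in the question.
            String url = new File("C:\\print.html").toURI().toURL().toString(); // e.g. file:/C:/print.html
            JEditorPane jep = new JEditorPane(url);
            jep.setEditable(false); // required for clickable links via a HyperlinkListener
            System.out.println("Loaded " + url);
        }
    }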

    Read the article

  • Enable Proxy Authentication for a normal Windows application

    - by Lalit
    My company's internet access goes through a proxy server with authentication (i.e. the browser prompts for a username/password every time I try to access any web page). Now I have some Windows applications that try to access the internet (like WebPI, or Visual Studio 2008 for RSS feeds), but as they are unable to pop up the authentication window, they fail to connect to the internet with the error: (407) Proxy Authentication Required. The exception here is VS2008: the first time, it always fails to load RSS feeds on the startup page, but when I click the link, it shows the authentication window and everything works fine after that. My question is: how can I configure a normal Windows application (through the app.config/app.manifest file) so that it either shows the authentication window or provides default credentials when accessing the web? To explore this further, I have created a console application in VS2008 which tries to search something on Google and display the result on the console. Code: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Net; using System.IO; namespace WebAccess.Test { class Program { static void Main(string[] args) { Console.WriteLine("Enter Serach Criteria:"); string criteria = Console.ReadLine(); string baseAddress = "http://www.google.com/search?q="; string output = ""; try { // Create the web request HttpWebRequest request = WebRequest.Create(baseAddress + criteria) as HttpWebRequest; // Get response using (HttpWebResponse response = request.GetResponse() as HttpWebResponse) { // Get the response stream StreamReader reader = new StreamReader(response.GetResponseStream()); // Console application output output = reader.ReadToEnd(); } Console.WriteLine("\nResponse : \n\n{0}", output); } catch (Exception ex) { Console.WriteLine("\nError : \n\n{0}", ex.ToString()); } } } } When running this, it gives the error: Enter Serach Criteria: Lalit Error : System.Net.WebException: The remote server returned an error: (407) Proxy Authen tication Required. at System.Net.HttpWebRequest.GetResponse() at WebAccess.Test.Program.Main(String[] args) in D:\LK\Docs\VS.NET\WebAccess. Test\WebAccess.Test\Program.cs:line 26 Press any key to continue . . .
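    One approach that often works for NTLM/Negotiate proxies (a sketch, not from the original post): attach the logged-on user's credentials to the system proxy, either in code as below or declaratively via <system.net><defaultProxy useDefaultCredentials="true" /></system.net> in app.config:

    // C# sketch; for a proxy that wants explicit credentials, substitute new NetworkCredential(user, password).
    using System.Net;

    static class ProxySetup
    {
        public static HttpWebRequest CreateRequest(string url)
        {
            IWebProxy proxy = WebRequest.GetSystemWebProxy();        // proxy from the system/IE settings
            proxy.Credentials = CredentialCache.DefaultCredentials;  // current Windows user
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
            request.Proxy = proxy;
            return request;
        }
    }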

    Read the article

  • UIImagePickerController Memory Leak

    - by Watson
    I am seeing a huge memory leak when using UIImagePickerController in my iPhone app. I am using standard code from the Apple documentation to implement the control: UIImagePickerController* imagePickerController = [[UIImagePickerController alloc] init]; imagePickerController.delegate = self; if ([UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]) { switch (buttonIndex) { case 0: imagePickerController.sourceType = UIImagePickerControllerSourceTypeCamera; [self presentModalViewController:imagePickerController animated:YES]; break; case 1: imagePickerController.sourceType = UIImagePickerControllerSourceTypePhotoLibrary; [self presentModalViewController:imagePickerController animated:YES]; break; default: break; } } And for the cancel: -(void) imagePickerControllerDidCancel:(UIImagePickerController *)picker { [[picker parentViewController] dismissModalViewControllerAnimated: YES]; [picker release]; } The didFinishPickingMediaWithInfo callback is just as standard, although I do not even have to pick anything to cause the leak. Here is what I see in Instruments when all I do is open the UIImagePickerController, pick the photo library, and press cancel, repeatedly. As you can see, the memory keeps growing, and eventually this causes my iPhone app to slow down tremendously. As you can see, I opened the image picker 24 times, and each time it malloc'd 128kb which was never released. Basically 3mb out of my total 6mb is never released. This memory stays leaked no matter what I do. Even after navigating away from the current controller, it remains the same. I have also implemented the picker control as a singleton with the same results. Here is what I see when I drill down into those two lines: Any help here would be greatly appreciated! Again, I do not even have to choose an image. All I do is present the controller and press cancel. Update 1 I downloaded and ran Apple's example of using the UIImagePickerController and I see the same leak happening there when running Instruments (both in the simulator and on the phone). http://developer.apple.com/library/ios/#samplecode/PhotoPicker/Introduction/Intro.html%23//apple_ref/doc/uid/DTS40010196 All you have to do is hit the photo library button and hit cancel over and over, and you'll see the memory keep growing. Any ideas? Update 2 I only see this problem when viewing the photo library. I can choose take photo, and open and close that one over and over, without a leak.

    Read the article

  • Big Data Matters with ODI12c

    - by Madhu Nair
    contributed by Mike Eisterer On October 17th, 2013, Oracle announced the release of Oracle Data Integrator 12c (ODI12c).  This release signifies improvements to Oracle’s Data Integration portfolio of solutions, particularly Big Data integration. Why Big Data = Big Business Organizations are gaining greater insights and actionability through increased storage, processing and analytical benefits offered by Big Data solutions.  New technologies and frameworks like HDFS, NoSQL, Hive and MapReduce support these benefits now. As further data is collected, analytical requirements increase and the complexity of managing transformations and aggregations of data compounds and organizations are in need for scalable Data Integration solutions. ODI12c provides enterprise solutions for the movement, translation and transformation of information and data heterogeneously and in Big Data Environments through: The ability for existing ODI and SQL developers to leverage new Big Data technologies. A metadata focused approach for cataloging, defining and reusing Big Data technologies, mappings and process executions. Integration between many heterogeneous environments and technologies such as HDFS and Hive. Generation of Hive Query Language. Working with Big Data using Knowledge Modules  ODI12c provides developers with the ability to define sources and targets and visually develop mappings to effect the movement and transformation of data.  As the mappings are created, ODI12c leverages a rich library of prebuilt integrations, known as Knowledge Modules (KMs).  These KMs are contextual to the technologies and platforms to be integrated.  Steps and actions needed to manage the data integration are pre-built and configured within the KMs.  The Oracle Data Integrator Application Adapter for Hadoop provides a series of KMs, specifically designed to integrate with Big Data Technologies.  The Big Data KMs include: Check Knowledge Module Reverse Engineer Knowledge Module Hive Transform Knowledge Module Hive Control Append Knowledge Module File to Hive (LOAD DATA) Knowledge Module File-Hive to Oracle (OLH-OSCH) Knowledge Module  Nothing to beat an Example: To demonstrate the use of the KMs which are part of the ODI Application Adapter for Hadoop, a mapping may be defined to move data between files and Hive targets.  The mapping is defined by dragging the source and target into the mapping, performing the attribute (column) mapping (see Figure 1) and then selecting the KM which will govern the process.  In this mapping example, movie data is being moved from an HDFS source into a Hive table.  Some of the attributes, such as “CUSTID to custid”, have been mapped over. Figure 1  Defining the Mapping Before the proper KM can be assigned to define the technology for the mapping, it needs to be added to the ODI project.  The Big Data KMs have been made available to the project through the KM import process.   Generally, this is done prior to defining the mapping. Figure 2  Importing the Big Data Knowledge Modules Following the import, the KMs are available in the Designer Navigator. 
Figure 3  The Project View in Designer, Showing Installed IKMs Once the KM is imported, it may be assigned to the mapping target.  This is done by selecting the Physical View of the mapping and examining the Properties of the Target.  In this case MOVIAPP_LOG_STAGE is the target of our mapping. Figure 4  Physical View of the Mapping and Assigning the Big Data Knowledge Module to the Target Alternative KMs may have been selected as well, providing flexibility and abstracting the logical mapping from the physical implementation.  Our mapping may be applied to other technologies as well. The mapping is now complete and is ready to run.  We will see more in a future blog about running a mapping to load Hive. To complete the quick ODI for Big Data Overview, let us take a closer look at what the IKM File to Hive is doing for us.  ODI provides differentiated capabilities by defining the process and steps which normally would have to be manually developed, tested and implemented into the KM.  As shown in figure 5, the KM is preparing the Hive session, managing the Hive tables, performing the initial load from HDFS and then performing the insert into Hive.  HDFS and Hive options are selected graphically, as shown in the properties in Figure 4. Figure 5  Process and Steps Managed by the KM What’s Next Big Data being the shape shifting business challenge it is is fast evolving into the deciding factor between market leaders and others. Now that an introduction to ODI and Big Data has been provided, look for additional blogs coming soon using the Knowledge Modules which make up the Oracle Data Integrator Application Adapter for Hadoop: Importing Big Data Metadata into ODI, Testing Data Stores and Loading Hive Targets Generating Transformations using Hive Query language Loading Oracle from Hadoop Sources For more information now, please visit the Oracle Data Integrator Application Adapter for Hadoop web site, http://www.oracle.com/us/products/middleware/data-integration/hadoop/overview/index.html Do not forget to tune in to the ODI12c Executive Launch webcast on the 12th to hear more about ODI12c and GG12c.
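As a rough illustration only (this is not output generated by ODI, and the column and path names are invented for the example), the work the File to Hive (LOAD DATA) Knowledge Module performs is conceptually similar to issuing Hive statements such as:

CREATE TABLE IF NOT EXISTS moviapp_log_stage (
  custid   INT,
  movieid  INT,
  activity STRING,
  logdate  STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

LOAD DATA INPATH '/user/odi/movie_data/part-00000'
OVERWRITE INTO TABLE moviapp_log_stage;

The KM adds session preparation, table management and error handling around steps like these, which is what the steps in Figure 5 summarize.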

    Read the article

  • How to use Azure storage for uploading and displaying pictures.

    - by Magnus Karlsson
    Basic set up of Azure storage for local development and production. This is a somewhat completion of the following guide from http://www.windowsazure.com/en-us/develop/net/how-to-guides/blob-storage/ that also involves a practical example that I believe is commonly used, i.e. upload and present an image from a user.   First we set up for local storage and then we configure for them to work on a web role. Steps: 1. Configure connection string locally. 2. Configure model, controllers and razor views.   1. Setup connectionsstring 1.1 Right click your web role and choose “Properties”. 1.2 Click Settings. 1.3 Add setting. 1.4 Name your setting. This will be the name of the connectionstring. 1.5 Click the ellipsis to the right. (the ellipsis appear when you mark the area. 1.6 The following window appears- Select “Windows Azure storage emulator” and click ok.   Now we have a connection string to use. To be able to use it we need to make sure we have windows azure tools for storage. 2.1 Click Tools –> Library Package manager –> Manage Nuget packages for solution. 2.2 This is what it looks like after it has been added.   Now on to what the code should look like. 3.1 First we need a view which collects images to upload. Here Index.cshtml. 1: @model List<string> 2:  3: @{ 4: ViewBag.Title = "Index"; 5: } 6:  7: <h2>Index</h2> 8: <form action="@Url.Action("Upload")" method="post" enctype="multipart/form-data"> 9:  10: <label for="file">Filename:</label> 11: <input type="file" name="file" id="file1" /> 12: <br /> 13: <label for="file">Filename:</label> 14: <input type="file" name="file" id="file2" /> 15: <br /> 16: <label for="file">Filename:</label> 17: <input type="file" name="file" id="file3" /> 18: <br /> 19: <label for="file">Filename:</label> 20: <input type="file" name="file" id="file4" /> 21: <br /> 22: <input type="submit" value="Submit" /> 23: 24: </form> 25:  26: @foreach (var item in Model) { 27:  28: <img src="@item" alt="Alternate text"/> 29: } 3.2 We need a controller to receive the post. Notice the “containername” string I send to the blobhandler. I use this as a folder for the pictures for each user. If this is not a requirement you could just call it container or anything with small characters directly when creating the container. 1: public ActionResult Upload(IEnumerable<HttpPostedFileBase> file) 2: { 3: BlobHandler bh = new BlobHandler("containername"); 4: bh.Upload(file); 5: var blobUris=bh.GetBlobs(); 6: 7: return RedirectToAction("Index",blobUris); 8: } 3.3 The handler model. I’ll let the comments speak for themselves. 1: public class BlobHandler 2: { 3: // Retrieve storage account from connection string. 4: CloudStorageAccount storageAccount = CloudStorageAccount.Parse( 5: CloudConfigurationManager.GetSetting("StorageConnectionString")); 6: 7: private string imageDirecoryUrl; 8: 9: /// <summary> 10: /// Receives the users Id for where the pictures are and creates 11: /// a blob storage with that name if it does not exist. 12: /// </summary> 13: /// <param name="imageDirecoryUrl"></param> 14: public BlobHandler(string imageDirecoryUrl) 15: { 16: this.imageDirecoryUrl = imageDirecoryUrl; 17: // Create the blob client. 18: CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient(); 19: 20: // Retrieve a reference to a container. 21: CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl); 22: 23: // Create the container if it doesn't already exist. 
24: container.CreateIfNotExists(); 25: 26: //Make available to everyone 27: container.SetPermissions( 28: new BlobContainerPermissions 29: { 30: PublicAccess = BlobContainerPublicAccessType.Blob 31: }); 32: } 33: 34: public void Upload(IEnumerable<HttpPostedFileBase> file) 35: { 36: // Create the blob client. 37: CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient(); 38: 39: // Retrieve a reference to a container. 40: CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl); 41: 42: if (file != null) 43: { 44: foreach (var f in file) 45: { 46: if (f != null) 47: { 48: CloudBlockBlob blockBlob = container.GetBlockBlobReference(f.FileName); 49: blockBlob.UploadFromStream(f.InputStream); 50: } 51: } 52: } 53: } 54: 55: public List<string> GetBlobs() 56: { 57: // Create the blob client. 58: CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient(); 59: 60: // Retrieve reference to a previously created container. 61: CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl); 62: 63: List<string> blobs = new List<string>(); 64: 65: // Loop over blobs within the container and output the URI to each of them 66: foreach (var blobItem in container.ListBlobs()) 67: blobs.Add(blobItem.Uri.ToString()); 68: 69: return blobs; 70: } 71: } 3.4 So, when the files have been uploaded we will get them to present them to out user in the index page. Pretty straight forward. In this example we only present the image by sending the Uri’s to the view. A better way would be to save them up in a view model containing URI, metadata, alternate text, and other relevant information but for this example this is all we need.   4. Now press F5 in your solution to try it out. You can see the storage emulator UI here:     4.1 If you get any exceptions or errors I suggest to first check if the service Is running correctly. I had problem with this and they seemed related to the installation and a reboot fixed my problems.     5. Set up for Cloud storage. To do this we need to add configuration for cloud just as we did for local in step one. 5.1 We need our keys to do this. Go to the windows Azure menagement portal, select storage icon to the right and click “Manage keys”. (Image from a different blog post though).   5.2 Do as in step 1.but replace step 1.6 with: 1.6 Choose “Manually entered credentials”. Enter your account name. 1.7 Paste your Account Key from step 5.1. and click ok.   5.3. Save, publish and run! Please feel free to ask any questions using the comments form at the bottom of this page. I will get back to you to help you solve any questions. Our consultancy agency also provides services in the Nordic regions if you would like any further support.
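A small configuration sketch to go with step 5 (placeholders only, not values from the article): once you switch from the emulator to a real storage account, the setting created in step 1 simply changes from UseDevelopmentStorage=true to a full account connection string, for example:

<!-- ServiceConfiguration (.cscfg) setting; AccountName and AccountKey are placeholders -->
<Setting name="StorageConnectionString"
         value="DefaultEndpointsProtocol=https;AccountName=YOUR_ACCOUNT;AccountKey=YOUR_KEY" />

No code changes should be needed, since BlobHandler reads the value through CloudConfigurationManager.GetSetting.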

    Read the article

  • SQL SERVER – Using expressor Composite Types to Enforce Business Rules

    - by pinaldave
    One of the features that distinguish the expressor Data Integration Platform from other products in the data integration space is its concept of composite types, which provide an effective and easily reusable way to clearly define the structure and characteristics of data within your application.  An important feature of the composite type approach is that it allows you to easily adjust the content of a record to its ultimate purpose.  For example, a record used to update a row in a database table is easily defined to include only the minimum set of columns, that is, a value for the key column and values for only those columns that need to be updated. Much like a class in higher level programming languages, you can also use the composite type as a way to enforce business rules onto your data by encapsulating a datum’s name, data type, and constraints (for example, maximum, minimum, or acceptable values) as a single entity, which ensures that your data can not assume an invalid value.  To what extent you use this functionality is a decision you make when designing your application; the expressor design paradigm does not force this approach on you. Let’s take a look at how these features are used.  Suppose you want to create a group of applications that maintain the employee table in your human resources database. Your table might have a structure similar to the HumanResources.Employee table in the AdventureWorks database.  This table includes two columns, EmployeID and rowguid, that are maintained by the relational database management system; you cannot provide values for these columns when inserting new rows into the table. Additionally, there are columns such as VacationHours and SickLeaveHours that you might choose to update for all employees on a monthly basis, which justifies creation of a dedicated application. By creating distinct composite types for the read, insert and update operations against this table, you can more easily manage this table’s content. When developing this application within expressor Studio, your first task is to create a schema artifact for the database table.  This process is completely driven by a wizard, only requiring that you select the desired database schema and table.  The resulting schema artifact defines the mapping of result set records to a record within the expressor data integration application.  The structure of the record within the expressor application is a composite type that is given the default name CompositeType1.  As you can see in the following figure, all columns from the table are included in the result set and mapped to an identically named attribute in the default composite type. If you are developing an application that needs to read this table, perhaps to prepare a year-end report of employees by department, you would probably not be interested in the data in the rowguid and ModifiedDate columns.  A typical approach would be to drop this unwanted data in a downstream operator.  But using an alternative composite type provides a better approach in which the unwanted data never enters your application. While working in expressor  Studio’s schema editor, simply create a second composite type within the same schema artifact, which you could name ReadTable, and remove the attributes corresponding to the unwanted columns. The value of an alternative composite type is even more apparent when you want to insert into or update the table.  
In the composite type used to insert rows, remove the attributes corresponding to the EmployeeID primary key and rowguid uniqueidentifier columns since these values are provided by the relational database management system. And to update just the VacationHours and SickLeaveHours columns, use a composite type that includes only the attributes corresponding to the EmployeeID, VacationHours, SickLeaveHours and ModifiedDate columns. By specifying this schema artifact and composite type in a Write Table operator, your upstream application need only deal with the four required attributes and there is no risk of unintentionally overwriting a value in a column that does not need to be updated. Now, what about the option to use the composite type to enforce business rules?  If you review the composition of the default composite type CompositeType1, you will note that the constraints defined for many of the attributes mirror the table column specifications.  For example, the maximum number of characters in the NationaIDNumber, LoginID and Title attributes is equivalent to the maximum width of the target column, and the size of the MaritalStatus and Gender attributes is limited to a single character as required by the table column definition.  If your application code leads to a violation of these constraints, an error will be raised.  The expressor design paradigm then allows you to handle the error in a way suitable for your application.  For example, a string value could be truncated or a numeric value could be rounded. Moreover, you have the option of specifying additional constraints that support business rules unrelated to the table definition. Let’s assume that the only acceptable values for marital status are S, M, and D.  Within the schema editor, double-click on the MaritalStatus attribute to open the Edit Attribute window.  Then click the Allowed Values checkbox and enter the acceptable values into the Constraint Value text box. The schema editor is updated accordingly. There is one more option that the expressor semantic type paradigm supports.  Since the MaritalStatus attribute now clearly specifies how this type of information should be represented (a single character limited to S, M or D), you can convert this attribute definition into a shared type, which will allow you to quickly incorporate this definition into another composite type or into the description of an output record from a transform operator. Again, double-click on the MaritalStatus attribute and in the Edit Attribute window, click Convert, which opens the Share Local Semantic Type window that you use to name this shared type.  There’s no requirement that you give the shared type the same name as the attribute from which it was derived.  You should supply a name that makes it obvious what the shared type represents. In this posting, I’ve overviewed the expressor semantic type paradigm and shown how it can be used to make your application development process more productive.  The beauty of this feature is that you choose when and to what extent you utilize the functionality, but I’m certain that if you opt to follow this approach your efforts will become more efficient and your work will progress more quickly.  As always, I encourage you to download and evaluate expressor Studio for your current and future data integration needs. 
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: CodeProject, Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • New ways for backup, recovery and restore of Essbase Block Storage databases – part 2 by Bernhard Kinkel

    - by Alexandra Georgescu
    After discussing in the first part of this article new options in Essbase for the general backup and restore, this second part will deal with the also rather new feature of Transaction Logging and Replay, which was released in version 11.1, enhancing existing restore options. Tip: Transaction logging and replay cannot be used for aggregate storage databases. Please refer to the Oracle Hyperion Enterprise Performance Management System Backup and Recovery Guide (rel. 11.1.2.1). Even if backups are done on a regular, frequent base, subsequent data entries, loads or calculations would not be reflected in a restored database. Activating Transaction Logging could fill that gap and provides you with an option to capture these post-backup transactions for later replay. The following table shows, which are the transactions that could be logged when Transaction Logging is enabled: In order to activate its usage, corresponding statements could be added to the Essbase.cfg file, using the TRANSACTIONLOGLOCATION command. The complete syntax reads: TRANSACTIONLOGLOCATION [ appname [ dbname]] LOGLOCATION NATIVE ?ENABLE | DISABLE Where appname and dbname are optional parameters giving you the chance in combination with the ENABLE or DISABLE command to set Transaction Logging for certain applications or databases or to exclude them from being logged. If only an appname is specified, the setting applies to all databases in that particular application. If appname and dbname are not defined, all applications and databases would be covered. LOGLOCATION specifies the directory to which the log is written, e.g. D:\temp\trlogs. This directory must already exist or needs to be created before using it for log information being written to it. NATIVE is a reserved keyword that shouldn’t be changed. The following example shows how to first enable logging on a more general level for all databases in the application Sample, followed by a disabling statement on a more granular level for only the Basic database in application Sample, hence excluding it from being logged. TRANSACTIONLOGLOCATION Sample Hyperion/trlog/Sample NATIVE ENABLE TRANSACTIONLOGLOCATION Sample Basic Hyperion/trlog/Sample NATIVE DISABLE Tip: After applying changes to the configuration file you must restart the Essbase server in order to initialize the settings. A maybe required replay of logged transactions after restoring a database can be done only by administrators. The following options are available: In Administration Services selecting Replay Transactions on the right-click menu on the database: Here you can select to replay transactions logged after the last replay request was originally executed or after the time of the last restored backup (whichever occurred later) or transactions logged after a specified time. Or you can replay transactions selectively based on a range of sequence IDs, which can be accessed using Display Transactions on the right-click menu on the database: These sequence ID s (0, 1, 2 … 7 in the screenshot below) are assigned to each logged transaction, indicating the order in which the transaction was performed. This helps to ensure the integrity of the restored data after a replay, as the replay of transactions is enforced in the same order in which they were originally performed. So for example a calculation originally run after a data load cannot be replayed before having replayed the data load first. After a transaction is replayed, you can replay only transactions with a greater sequence ID. 
For example, replaying the transaction with sequence ID of 4 includes all preceding transactions, while afterwards you can only replay transactions with a sequence ID of 5 or greater. Tip: After restoring a database from a backup you should always completely replay all logged transactions, which were executed after the backup, before executing new transactions. But not only the transaction information itself needs to be logged and stored in a specified directory as described above. During transaction logging, Essbase also creates archive copies of data load and rules files in the following default directory: ARBORPATH/app/appname/dbname/Replay These files are then used during the replay of a logged transaction. By default Essbase archives only data load and rules files for client data loads, but in order to specify the type of data to archive when logging transactions you can use the command TRANSACTIONLOGDATALOADARCHIVE as an additional entry in the Essbase.cfg file. The syntax for the statement is: TRANSACTIONLOGDATALOADARCHIVE [appname [dbname]] [OPTION] While to the [appname [dbname]] argument the same applies like before for TRANSACTIONLOGLOCATION, the valid values for the OPTION argument are the following: Make the respective setting for which files copies should be logged, considering from which location transactions are usually taking place. Selecting the NONE option prevents Essbase from saving the respective files and the data load cannot be replayed. In this case you must first manually load the data before you can replay the transactions. Tip: If you use server or SQL data and the data and rules files are not archived in the Replay directory (for example, you did not use the SERVER or SERVER_CLIENT option), Essbase replays the data that is actually in the data source at the moment of the replay, which may or may not be the data that was originally loaded. You can find more detailed information in the following documents: Oracle Hyperion Enterprise Performance Management System Backup and Recovery Guide (rel. 11.1.2.1) Oracle Essbase Online Documentation (rel. 11.1.2.1)) Enterprise Performance Management System Documentation (including previous releases) Or on the Oracle Technology Network. If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning stay tuned for coming articles or check our training courses and web presentations. You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here; (please make sure to select your country/region at the top of this page) or in the OU Learning paths section, where Planning, Essbase and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region). Or drop me a note directly: bernhard.kinkel@oracle.com. About the Author: Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007 where he is a Principal Education Consultant. Based on these many years of working with Hyperion products he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis. 
Disclaimer: All methods and features mentioned in this article must be considered and tested carefully related to your environment, processes and requirements. As guidance please always refer to the available software documentation. This article does not recommend or advise any explicit action or change, hence the author cannot be held responsible for any consequences due to the use or implementation of these features.

    Read the article

  • Creating a bundle - What's going wrong?

    - by joeylange
    Hello everyone, I've got a relatively simple one here. I'm making bundles to live inside the Resources folder of my application (and possibly in the Application Support folder). These bundles will contain template information for the data the application handles. I've constructed a bundle, with extension "booksprintstyle", and the directory structure is up to spec. I have an Info.plist all set and I think I've filled in all the values I need to. Do I need to change something in my App to have these folders-with-extensions recognized as bundle files, or am I missing something in my bundle structure? I noticed that some bundles have a file called PkgInfo; is that important? Below is the Info.plist from my bundle. <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>CFBundleDevelopmentRegion</key> <string>English</string> <key>CFBundleGetInfoString</key> <string>1.0, Copyright © 2009 Joey Lange</string> <key>CFBundleIdentifier</key> <string>net.atherial.books.exporter.printingpress.printstyle</string> <key>CFBundleInfoDictionaryVersion</key> <string>6.0</string> <key>CFBundleName</key> <string>Books Print Style - Generic</string> <key>CFBundlePackageType</key> <string>BNDL</string> <key>CFBundleShortVersionString</key> <string>1.0</string> <key>CFBundleSignature</key> <string>????</string> <key>CFBundleDisplayName</key> <string>Books Print Style - Generic</string> <key>NSHumanReadableCopyright</key> <string>Copyright © 2009 Joey Lange</string> <key>CFBundleVersion</key> <string>1.0</string> </dict> </plist>
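    To address the PkgInfo question directly (worth verifying against Apple's Bundle Programming Guide): PkgInfo simply duplicates CFBundlePackageType/CFBundleSignature and is generally not required for a resource bundle like this, and NSBundle will load any properly structured folder regardless of its extension. A sketch of how the app might enumerate these styles (variable names here are assumptions, not from the question):

    // Objective-C sketch: find .booksprintstyle bundles inside the app's Resources folder.
    NSString *resourcePath = [[NSBundle mainBundle] resourcePath];
    NSArray *items = [[NSFileManager defaultManager] contentsOfDirectoryAtPath:resourcePath error:NULL];
    for (NSString *name in items) {
        if ([[name pathExtension] isEqualToString:@"booksprintstyle"]) {
            NSBundle *styleBundle = [NSBundle bundleWithPath:[resourcePath stringByAppendingPathComponent:name]];
            NSLog(@"Found style: %@", [styleBundle objectForInfoDictionaryKey:@"CFBundleName"]);
        }
    }

    If the goal is also for Finder to show each folder as a single file, the owning application typically has to declare the extension as a package document type (LSTypeIsPackage) in its own Info.plist.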

    Read the article
