Search Results

Search found 40479 results on 1620 pages for 'binary files'.

Page 1476/1620

  • Is SQLite a must for Merb?

    - by mayank
    Hello all, I have a question about Merb's dependency on SQLite. I am going to install Merb on my machine, but SQLite is not installed there. I tried the command "gem install merb" and got the error below. If there is a way to install Merb with MySQL instead, please tell me. Thanks, Mayank

        Building native extensions. This could take a while...
        ERROR: Error installing merb:
        ERROR: Failed to build gem native extension.
        /usr/bin/ruby1.8 extconf.rb
        checking for sqlite3.h... no
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of necessary
        libraries and/or headers. Check the mkmf.log file for more details.
        You may need configuration options.
        Provided configuration options:
          --with-opt-dir --without-opt-dir
          --with-opt-include --without-opt-include=${opt-dir}/include
          --with-opt-lib --without-opt-lib=${opt-dir}/lib
          --with-make-prog --without-make-prog
          --srcdir=. --curdir --ruby=/usr/bin/ruby1.8
          --with-sqlite3-dir --without-sqlite3-dir
          --with-sqlite3-include --without-sqlite3-include=${sqlite3-dir}/include
          --with-sqlite3-lib --without-sqlite3-lib=${sqlite3-dir}/lib
        Gem files will remain installed in /usr/lib/ruby/gems/1.8/gems/do_sqlite3-0.10.2 for inspection.
        Results logged to /usr/lib/ruby/gems/1.8/gems/do_sqlite3-0.10.2/ext/do_sqlite3/gem_make.out
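
    The build fails because the do_sqlite3 gem cannot find the SQLite development headers. Two options, sketched here under the assumption of a Debian/Ubuntu-style system (the gem names for the MySQL route should be double-checked against your Merb/DataMapper versions):

        # Option 1: install the SQLite headers so the default stack builds
        sudo apt-get install libsqlite3-dev
        gem install merb

        # Option 2: skip the SQLite-bundled meta-gem and use MySQL instead
        sudo apt-get install libmysqlclient-dev
        gem install merb-core merb-more
        gem install do_mysql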

    Read the article

  • Business Layer Pattern on Rails? MVCL

    - by Fabiano PS
    This is a broad question, and I would appreciate no short, dismissive answers along the lines of "oh, that is the model's job, this question is pointless".

    PROBLEM: Where I work, people spent over two years building a system for managing a manufacture-on-demand process in the most simplified yet broadest possible way, involving selling, buying and assembling. The system is built on Ruby on Rails. It has been changed many times, and the result is a mess of callbacks (some are called several times), 200+ models, and fat controllers: all bad.

    The QUESTION is: is there a gem, or a pattern, designed to handle the logic of a large Rails app? That logic layer would still be able to talk fully to the models (whose only concern would be data format handling and validation).

    What I EXPECT is to move complexity out of the various controllers, and out of hard-to-track callbacks, into files whose responsibility is to handle the logic of one business operation. In some cases there is a need to wait for a response; in others, validating the input is enough and a background process takes over. For example, to sell some products (where we need to wait for the operation to finish):

        1. Set up a view able to collect the products input.
        2. The controller gets the product list entered by the employee and calls the logic, as sketched below:

        Logic::ExecuteWithResponse('sell', 'products',
            :prods    => @product_list_with_qtt,
            :when     => @date,
            :employee => current_user() )

    This logic would handle the buying order, assembly order, machine schedule, warehouse reservation, and the rest.
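
    One common shape for that layer is a plain-Ruby service object, one class per business operation (gems exist in this space, but the pattern needs none of them). A minimal sketch of the idea; every class and method name below is invented for illustration:

        # app/services/sell_products.rb -- hypothetical service object
        class SellProducts
          def initialize(products, date, employee)
            @products = products
            @date     = date
            @employee = employee
          end

          # Runs the whole business operation inside one transaction;
          # raises ActiveRecord::RecordInvalid if a step fails validation.
          def call
            ActiveRecord::Base.transaction do
              order = create_buying_order
              reserve_warehouse_stock(order)
              schedule_assembly(order)
              order
            end
          end

          private

          def create_buying_order
            # plain model calls only: models keep validation and data format
            Order.create!(:items => @products, :due_on => @date, :seller => @employee)
          end

          def reserve_warehouse_stock(order)
            # stub: warehouse reservation would go here
          end

          def schedule_assembly(order)
            # stub: machine scheduling would go here
          end
        end

    The controller then shrinks to building the service and branching on its result:

        order = SellProducts.new(@product_list_with_qtt, @date, current_user).call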

    Read the article

  • Getting Visual Studio macros in console app

    - by Paul Steckler
    In a Visual Studio extension, you can get the default include paths for all projects with C# code like:

        String dirs = dte2.get_Properties("Projects", "VCDirectories");

    where dte2 is the Visual Studio application object. Usually, those directories contain macros like $(INCLUDE). You can expand those macros by looking at dte2.Solution.Projects, finding the relevant project in that collection; from the project, look at project.Configurations, find the relevant configuration, and call its Evaluate method.

    In VS2005/VS2008, there's a .vssettings file that contains the VCDirectories. In VS2010, there's a property sheet with the same information. A console application can just parse those files -- great. But how can you expand the macros? As a first step, I tried instantiating a VCProjectEngine object in a console app, but that just resulted in a COM failure. So I don't know how to instantiate a VCProject object in order to follow the same strategy I used in a VS extension. Where are the macro bindings stored?
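
    For the narrower job of expanding $(NAME) tokens once the raw VCDirectories string has been parsed out of the .vssettings file or property sheet, a small stand-alone expander may be enough, assuming the macros you care about ($(INCLUDE), $(LIB), $(PATH), ...) resolve to environment variables in the context where the console app runs; this is a sketch under that assumption, not a substitute for the configuration's Evaluate method:

        using System;
        using System.Text.RegularExpressions;

        static class MacroExpander
        {
            // Replaces every $(NAME) with the NAME environment variable;
            // unknown names are left as-is so they stay visible.
            public static string Expand(string value)
            {
                return Regex.Replace(value, @"\$\(([^)]+)\)", m =>
                {
                    string name = m.Groups[1].Value;
                    return Environment.GetEnvironmentVariable(name) ?? m.Value;
                });
            }

            static void Main()
            {
                Console.WriteLine(Expand(@"$(INCLUDE);$(VCInstallDir)include"));
            }
        }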

    Read the article

  • Implementing the ‘defer’ statement from Go in Objective-C?

    - by zoul
    Hello! Today I read about the defer statement in the Go language: "A defer statement pushes a function call onto a list. The list of saved calls is executed after the surrounding function returns. Defer is commonly used to simplify functions that perform various clean-up actions." I thought it would be fun to implement something like this in Objective-C. Do you have some idea how to do it? I thought about dispatch finalizers, autoreleased objects and C++ destructors.

    Autoreleased objects:

        @interface Defer : NSObject {}
        + (id) withCode: (dispatch_block_t) block;
        @end

        @implementation Defer
        - (void) dealloc {
            block();
            [super dealloc];
        }
        @end

        #define defer(__x) [Defer withCode:^{__x}]

        - (void) function {
            defer(NSLog(@"Done"));
            …
        }

    Autoreleased objects seem like the only solution that lasts at least to the end of the function, as the other solutions would trigger when the current scope ends. On the other hand, they could stay in memory much longer, which would be asking for trouble.

    Dispatch finalizers were my first thought, because blocks live on the stack and therefore I could easily make something execute when the stack unrolls. But after a peek in the documentation, it doesn't look like I can attach a simple "destructor" function to a block, can I?

    C++ destructors are about the same thing: I would create a stack-based object with a block to be executed when the destructor runs. This would have the ugly disadvantage of turning plain .m files into Objective-C++.

    I don't really plan on using this stuff in production, I'm just interested in various solutions. Can you come up with something working, without obvious disadvantages? Both scope-based and function-based solutions would be interesting.
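
    Another scope-based route, not mentioned above, is Clang's cleanup variable attribute, which runs a function when a local variable goes out of scope; combined with blocks it gives a defer-like macro in plain Objective-C, no Objective-C++ needed. A rough sketch, assuming a Clang compiler with blocks enabled (the helper and macro names are made up):

        #import <Foundation/Foundation.h>

        // Called automatically when the annotated variable leaves scope.
        static void run_deferred_block(dispatch_block_t *block) {
            if (*block) (*block)();
        }

        #define DEFER_CONCAT2(a, b) a##b
        #define DEFER_CONCAT(a, b)  DEFER_CONCAT2(a, b)

        // Declares a hidden local whose cleanup handler fires the block.
        #define defer(code) \
            dispatch_block_t DEFER_CONCAT(_defer_, __LINE__) \
                __attribute__((cleanup(run_deferred_block), unused)) = ^{ code; }

        static void demo(void) {
            defer(NSLog(@"Done"));       // runs when demo() returns, not at pool drain
            NSLog(@"Working...");
        }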

    Read the article

  • What do I need to do to make a WPF Browser Application (XBAP) that requires Full Trust work on Windows 7?

    - by Benoit J. Girard
    So this is a Visual Studio 2008, .NET, WPF, XBAP, Windows 7 question, regarding .NET trust policies. At work, we have several Web Browser Applications (.XBAP files) developed with Visual Studio 2008 (so .NET 3.5) that we deployed internally. These required a .NET FullTrust policy; we found a way to make an .MSI that adjusted the policy on individual stations, and everything worked great. Users love in-browser apps. This was last year and on Windows XP.

    This year our company started upgrading users to Windows 7, and now none of our Web Browser Applications work. The error message is "Trust Not Granted", as if the policy-changing .MSI had not been run. Other details: I can confirm that our apps work on Windows XP in Internet Explorer 7 and Firefox, and do not work on Windows 7 in Internet Explorer 8 or Firefox. I must admit that .NET security policies mystify me. Still, I could not find any mention of this problem on the Net at large or on this site. Did anybody else encounter this problem? Any and all help is welcome.
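
    One hedged thing to check first: the trust grant has to be re-applied on the Windows 7 machines, and on 64-bit installs there is a second, separate policy under Framework64 that the old .MSI may never have touched. A sketch of granting the policy by hand with caspol (the server URL is a placeholder, and the group number and flags are worth verifying against caspol -lg on your own machines):

        rem grant FullTrust to XBAPs served from the internal web server
        %windir%\Microsoft.NET\Framework\v2.0.50727\caspol.exe -q -m ^
            -ag 1 -url "http://intranet-server/apps/*" FullTrust -n "Internal XBAPs"

        rem on 64-bit Windows 7, repeat for the 64-bit runtime's policy
        %windir%\Microsoft.NET\Framework64\v2.0.50727\caspol.exe -q -m ^
            -ag 1 -url "http://intranet-server/apps/*" FullTrust -n "Internal XBAPs"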

    Read the article

  • Getting text between quotes using regular expression

    - by Camsoft
    I'm having some issues with a regular expression I'm creating. I need a regex that matches the following examples and sub-matches the first quoted string:

    Input strings:

        ("Lorem ipsum dolor sit amet, consectetur adipiscing elit.")
        ('Lorem ipsum dolor sit amet, consectetur adipiscing elit. ')
        ('Lorem ipsum dolor sit amet, consectetur adipiscing elit. ', 'arg1', "arg2")

    Must sub-match:

        Lorem ipsum dolor sit amet, consectetur adipiscing elit.

    Regex so far:

        \((["'])([^"']+)\1,?.*\)

    The regex sub-matches the text between the first set of quotes and returns the sub-match displayed above. This is almost working perfectly, but the problem is that if the quoted string contains quotes within the text, the sub-match stops at the first one. See below:

    Failing input strings:

        ("Lorem ipsum dolor \"sit\" amet, consectetur adipiscing elit.")

    only sub-matches "Lorem ipsum dolor ", and

        ("Lorem ipsum dolor 'sit' amet, consectetur adipiscing elit.")

    fails to match entirely.

    Notes: the input strings are actually PHP function calls. I'm writing a script that will scan .php source files for a specific function and grab the text of its first parameter.
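
    One way to handle embedded quotes, offered as a sketch to test against your real sources: treat a backslash escape as a single unit and otherwise accept any character that is not the opening quote, so the capture only stops at the first unescaped quote of the same kind:

        \((["'])((?:\\.|(?!\1).)*)\1\s*(?:,.*)?\)

    A quick check of that pattern (shown in Python only because the scanning script's language isn't specified; group 2 carries the first parameter with its escapes intact):

        import re

        pattern = re.compile(r"""\((["'])((?:\\.|(?!\1).)*)\1\s*(?:,.*)?\)""", re.S)

        sample = r'''("Lorem ipsum dolor \"sit\" amet, consectetur adipiscing elit.")'''
        print(pattern.search(sample).group(2))
        # -> Lorem ipsum dolor \"sit\" amet, consectetur adipiscing elit.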

    Read the article

  • 32 bit depth jpg images problem in IE when referenced locally

    - by Stefan
    We have a web application that takes an image which is uploaded and resized. The resize library we used saves all pictures at 32-bit depth, whatever the depth was before. We have an online client that views the pictures via an HTML file, and all is fine there: all pictures are shown correctly.

    The problem: we also have a VB WinForms application that downloads the pictures and shows them in an HTML file, locally, in a WebBrowser control. Here all the pictures are rejected (not rendered), just the red cross. If we create a static HTML file with img tags in it locally, it's the same: all pictures with 32-bit depth are shown as red crosses. If we re-save the pictures with 24-bit depth, it magically works again. So of course that became our workaround: let the resize library save all pictures at 24-bit depth instead.

    Summary: 32-bit JPG files show correctly in IE when served online, but not when referenced locally from a local HTML file. (This is true for IE8 on both Windows XP and Windows 7.) The same local HTML file opened in Mozilla shows OK.

    Question: I have googled this a lot but have not found anything about this problem. Is this a bug in IE8?
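
    If changing the resize library is not an option, the same 24-bit workaround can be applied on the client after download: re-save each picture as a 24-bit RGB bitmap before the WebBrowser control sees it. A small sketch with System.Drawing (file paths and names are placeholders):

        Imports System.Drawing
        Imports System.Drawing.Imaging

        Module Resave24bpp
            Sub Main(ByVal args As String())
                ' args(0) = source image, args(1) = destination JPEG
                Using src As Image = Image.FromFile(args(0))
                    Using dst As New Bitmap(src.Width, src.Height, PixelFormat.Format24bppRgb)
                        Using g As Graphics = Graphics.FromImage(dst)
                            ' redraw the source onto a 24bpp surface
                            g.DrawImage(src, 0, 0, src.Width, src.Height)
                        End Using
                        dst.Save(args(1), ImageFormat.Jpeg)
                    End Using
                End Using
            End Sub
        End Module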

    Read the article

  • script to find "deny" ACE in ACLs, and remove it

    - by Tom
    On my 100TB cluster, I need to find the directories and files that have a "deny" ACE in their ACL, then remove that ACE from each of them. I'm using the following:

        # find . -print0 | xargs -0 ls -led | grep deny -B4

    and get this output (partial, for example only):

        -r--rw---- 1 chris GroupOne 4096 Mar 6 18:12 ./directoryA/fileX.txt
        OWNER: user:chris
        GROUP: group:GroupOne
        0: user:chris allow file_gen_read,std_write_dac,file_write_attr
        1: user:chris deny file_write,append,file_write_ext_attr,execute
        --
        -r--rwxrwx 1 chris GroupOne 14728221 Mar 6 18:12 ./directoryA/subdirA/fileZ.txt
        OWNER: user:chris
        GROUP: group:GroupOne
        0: user:chris allow file_gen_read,std_write_dac,file_write_attr
        1: user:chris deny file_write,append,file_write_ext_attr,execute
        --
        OWNER: user:bob
        GROUP: group:GroupTwo
        0: user:bob allow dir_gen_read,dir_gen_write,dir_gen_execute,std_write_dac,delete_child,object_inherit,container_inherit
        1: group:GroupTwo allow std_read_dac,std_write_dac,std_synchronize,dir_read_attr,dir_write_attr,object_inherit,container_inherit
        2: group:GroupTwo deny list,add_file,add_subdir,dir_read_ext_attr,dir_write_ext_attr,traverse,delete_child,object_inherit,container_inherit
        --

    As you can see, depending on where the "deny" ACE sits, I may or may not see the path. I could increase the -B value (I've seen up to 8 ACEs on a file), but then I would get even more output to sift through. What I need to do next is extract $ACENUMBER and $PATHTOFILE so that I can execute this command:

        chmod -a# $ACENUMBER $PATHTOFILE

    An additional issue is that the find command above gives a relative path, whereas I need the full path; I guess that would need to be handled somehow. Any guidance on how to accomplish this?
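
    A rough sketch of one way to do it: process each entry on its own so the path is always known, and remove the highest-numbered deny ACE first so earlier indices don't shift. It assumes the OneFS-style "N: ... deny ..." lines shown above and the chmod -a# syntax from the question, so test it on a small subtree first:

        #!/bin/bash
        # Walk the tree from an absolute starting point so paths come out full.
        find "$PWD" -print0 | while IFS= read -r -d '' path; do
            # Pick the index in front of ':' on every ACE line whose verb is "deny".
            ls -led "$path" |
            awk '$1 ~ /^[0-9]+:$/ && $3 == "deny" { sub(":", "", $1); print $1 }' |
            sort -rn |
            while read -r ace; do
                echo "removing ACE $ace from $path"
                chmod -a# "$ace" "$path"
            done
        done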

    Read the article

  • Storing URLs while Spidering

    - by itemio
    I created a little web spider in Python which I'm using to collect URLs. I'm not interested in the content. Right now I'm keeping all the visited URLs in a set in memory, because I don't want my spider to visit URLs twice. Of course that's a very limited way of accomplishing this. So what's the best way to keep track of my visited URLs?

    Should I use a database?
        * Which one? MySQL, SQLite, PostgreSQL?
        * How should I save the URLs? As a primary key, trying to insert every URL before visiting it?

    Or should I write them to a file?
        * One file?
        * Multiple files? How should I design the file structure?

    I'm sure there are books and plenty of papers on this or similar topics. Can you give me some advice on what I should read?
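
    A minimal sketch of the database route with SQLite from the standard library, using the URL itself as the primary key so the insert doubles as the "have I seen this?" check (the table and file names are made up):

        import sqlite3

        class VisitedUrls:
            def __init__(self, path="visited.db"):
                self.conn = sqlite3.connect(path)
                self.conn.execute(
                    "CREATE TABLE IF NOT EXISTS visited (url TEXT PRIMARY KEY)")

            def add_if_new(self, url):
                """Return True if the URL was not seen before (and record it)."""
                with self.conn:
                    cur = self.conn.execute(
                        "INSERT OR IGNORE INTO visited (url) VALUES (?)", (url,))
                return cur.rowcount == 1

        # usage inside the crawl loop
        seen = VisitedUrls()
        if seen.add_if_new("http://example.com/"):
            pass  # fetch and parse the page here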

    Read the article

  • Reset.css and then a Set.css

    - by Sixfoot Studio
    For a while now I have been using a reset.css file to reset everything before I start laying out my HTML designs. The reset is great in that it allows me to better control attributes such as margins, padding, line-height etc. for all browsers; in essence, the flatliner of CSS files. Now, to get the heart beating again, I need a "set.css" file.

    So what I have done is create an HTML file with all the possible elements on the page, to then go and set the padding, margins etc. of h1, h2, p, td and so on. I need some help with this, as I am not sure what the defaults normally are. I had a look at the Firefox default CSS file that applies all these attributes to a raw HTML file, but it doesn't cover all the scenarios I could come up with when developing a site.

    Here's an example of the set.html file (a work in progress), which can be used as a lorem ipsum filler to add to your first page in a CMS and then to style with a "set.css" file: http://www.sixfoot.co.za/labs/Html-Css/set.html

    I'd appreciate it if someone knows whether something like a set.css file already exists, or could tell me what the usual paddings and margins are in cases like this, once the CSS has been reset. Cheers, James
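
    For reference, a small sketch of what such a "set.css" starting point often looks like; the values below approximate common browser defaults (Firefox's html.css among them) and are meant as a baseline to tune, not an exact copy of any one browser:

        /* set.css -- approximate browser-default spacing, adjust to taste */
        body      { margin: 8px; }
        h1        { font-size: 2em;    font-weight: bold; margin: 0.67em 0; }
        h2        { font-size: 1.5em;  font-weight: bold; margin: 0.83em 0; }
        h3        { font-size: 1.17em; font-weight: bold; margin: 1em 0; }
        p, ul, ol { margin: 1em 0; }
        ul, ol    { padding-left: 40px; }
        table     { border-collapse: separate; border-spacing: 2px; }
        th, td    { padding: 1px; }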

    Read the article

  • How to Verify Signature, Loading PUBLIC KEY From PEM file?

    - by bbirtle
    I'm posting this in the hope it saves somebody else the hours I lost on this really stupid problem involving converting formats of public keys. If anybody sees a simpler solution or a problem, please let me know!

    The eCommerce system I'm using sends me some data along with a signature. They also give me their public key in .pem format. The .pem file looks like this:

        -----BEGIN PUBLIC KEY-----
        MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDe+hkicNP7ROHUssGNtHwiT2Ew
        HFrSk/qwrcq8v5metRtTTFPE/nmzSkRnTs3GMpi57rBdxBBJW5W9cpNyGUh0jNXc
        VrOSClpD5Ri2hER/GcNrxVRP7RlWOqB1C03q4QYmwjHZ+zlM4OUhCCAtSWflB4wC
        Ka1g88CjFwRw/PB9kwIDAQAB
        -----END PUBLIC KEY-----

    Here's the magic code to turn the above into an RSACryptoServiceProvider that is capable of verifying the signature. It uses the BouncyCastle library, since .NET apparently (and appallingly) cannot do it without some major headaches involving certificate files:

        RSACryptoServiceProvider thingee;

        using (var reader = File.OpenText(@"c:\pemfile.pem"))
        {
            var x = new PemReader(reader);
            var y = (RsaKeyParameters)x.ReadObject();
            thingee = (RSACryptoServiceProvider)RSACryptoServiceProvider.Create();
            var pa = new RSAParameters();
            pa.Modulus = y.Modulus.ToByteArray();
            pa.Exponent = y.Exponent.ToByteArray();
            thingee.ImportParameters(pa);
        }

    And then the code to actually verify the signature:

        var signature = ... // read from the packet sent by the eCommerce system
        var data = ...      // read from the packet sent by the eCommerce system

        var sha = new SHA1CryptoServiceProvider();
        byte[] hash = sha.ComputeHash(Encoding.ASCII.GetBytes(data));
        byte[] bSignature = Convert.FromBase64String(signature);

        // Verify signature, FINALLY:
        var hasValidSig = thingee.VerifyHash(hash, CryptoConfig.MapNameToOID("SHA1"), bSignature);

    Read the article

  • Remove parent of matched locator

    - by Ilan
    Is there a way to locate a node based on its child properties? I need a web.config transform to remove the second <dependentAssembly> in the following:

        <runtime>
          <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
            <!-- Don't want to delete this one -->
            <dependentAssembly>
              <assemblyIdentity name="System.Web.Helpers" publicKeyToken="31bf3856ad364e35"/>
              <bindingRedirect oldVersion="1.0.0.0-2.0.0.0" newVersion="2.0.0.0"/>
            </dependentAssembly>
            <!-- This is the one I want to delete -->
            <dependentAssembly>
              <assemblyIdentity name="Microsoft.VisualStudio.Enterprise.AspNetHelper" publicKeyToken="b03f5f7f11d50a3a" culture="neutral"/>
              <codeBase version="11.0.0.0" href="file:///C:/Program%20Files%20(x86)/Microsoft%20Visual%20Studio%2011.0/Common7/IDE/PrivateAssemblies/Microsoft.VisualStudio.Enterprise.AspNetHelper.DLL"/>
            </dependentAssembly>
          </assemblyBinding>
        </runtime>

    Finding the <assemblyIdentity> is easy enough, but I need to delete its parent <dependentAssembly> (and the <codeBase>). If there were an xdt:Transform="RemoveParent" this would do the trick, but AFAIK there isn't. Alternatively, if there were a Locator I could use on the <dependentAssembly> that matches on children, that could work too.
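
    XDT does allow a Locator expression on the element you want removed, so one avenue worth trying, sketched from memory and untested, is a Condition locator that inspects the child <assemblyIdentity>; the local-name() dance is only there because the binding elements live in the asm.v1 namespace, and this fragment would sit inside your transform's <configuration xmlns:xdt="..."> root:

        <runtime>
          <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
            <dependentAssembly xdt:Transform="Remove"
                xdt:Locator="Condition(./*[local-name()='assemblyIdentity'][@name='Microsoft.VisualStudio.Enterprise.AspNetHelper'])" />
          </assemblyBinding>
        </runtime>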

    Read the article

  • How to preprocess text to do OCR error correction

    - by eaglefarm
    Here is what I'm trying to accomplish: I need to get several large text files off a computer that is not networked and has no output other than a printer. I tried printing the text, then scanning the printout with OCR to recover the text on another computer, but the OCR makes lots of errors (1 vs l, o vs 0, O vs D, etc.).

    To solve this I am thinking of writing a program to process (annotate?) the text file before printing it, so that the errors can be corrected from the text output of the OCR program. For example, for 1 (the digit one) vs l (the letter L), I could change the text by inserting \nnn after characters that are frequently wrong in the OCR results, so that

        sample

    becomes

        sampl\108e

    Then I can write another program to examine the file, looking for \nnn, checking the character before the \nnn (where nnn is the ASCII code in decimal), and fixing it if necessary. Of course that program will have to recognize that the \nnn may contain errors too, but at least it knows that the nnn are digits and can easily correct them. I think I would also add a CRC to each line so that any line that isn't corrected perfectly can be flagged as having a problem.

    Has anyone done anything like this? If there is an existing way of doing this I'd rather not reinvent the wheel. Any suggestions for an annotation format that would help solve this problem would be helpful too.
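
    A small sketch of the annotating side in Python, just to make the format concrete; the set of "risky" characters and the CRC-32 line checksum are assumptions to adjust, and the matching corrector on the receiving side is left out:

        import zlib

        RISKY = set("1lI0oO5S8B")          # characters OCR commonly confuses

        def annotate_line(line):
            """Append \\nnn after each risky character, then a line checksum."""
            out = []
            for ch in line:
                out.append(ch)
                if ch in RISKY:
                    out.append("\\%03d" % ord(ch))
            body = "".join(out)
            crc = zlib.crc32(line.encode("ascii")) & 0xffffffff
            return "%s |CRC:%08x" % (body, crc)

        print(annotate_line("sample"))     # -> sampl\108e |CRC:...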

    Read the article

  • Rendering LaTeX on third-party websites

    - by A. Rex
    There are some sites on the web that render LaTeX into some more readable form, such as Wikipedia, some WordPress blogs, and MathOverflow. They may use images, MathML, jsMath, or something like that. There are other sites on the web where LaTeX appears inline and is not rendered, such as the arXiv, various math forums, or my email. In fact, it is quite common to see an arXiv paper's abstract with raw LaTeX in it, e.g. this paper. Is there a plugin available for Firefox, or would it be possible to write one, that renders LaTeX within pages that do not provide a rendering mechanism themselves?

    Some notes:
        * It may be impossible to render some of the code, because authors often copy-paste code directly from their source TeX files, which may contain things like "\cite{foo}" or undefined commands. These should be left alone.
        * This question is a repost of a question from MathOverflow that was closed for not being related to math.
        * I program a lot, but JavaScript is not my specialty, so comments along the lines of "look at this library" are not particularly helpful to me (but may be to others).

    Read the article

  • Azure application working on emulator but not on azure cloud

    - by Hisham Riaz
    First of all, I am developing my MVC3 application in Visual Web Developer 2010 Express, by migrating my MVC3 (.cshtml) files onto MVC2. It works great on the local system using the emulator, but once I deploy the application to Azure it gives runtime errors. Example:

        The layout page "~/Views/Shared/test_page.cshtml" could not be found at the following path: "~/Views/Shared/test_page.cshtml".

        Source Error:
        Line 8:  //Layout = "~/Views/Shared/upload.cshtml";
        Line 9:  //Layout = "~/Views/Shared/_Layout2.cshtml";
        Line 10: Layout = "~/Views/Shared/test_page.cshtml";
        Line 11: }
        Line 12: else

    The code is as follows. _ViewStart.cshtml:

        @{
            string AccId = Request.QueryString["AccId"].ToString();
            if (AccId == "0")
            {
                //Layout = "~/Views/Shared/upload.cshtml";
                //Layout = "~/Views/Shared/_Layout2.cshtml";
                Layout = "~/Views/Shared/test_page.cshtml";
            }
            else
            {
                string LayOutPagePath = MVCTest.Models.ComponentClass.GetLayOutPagePath(AccId);
                Layout = LayOutPagePath;
            }
        }

    However, the page exists and works fine in the Azure emulator, just not in the Azure cloud. Code for test_page.cshtml:

        @{
            var result = "1234567890";
            var temp_xml = MVCTest.Models.ComponentClass.GetTemplateAndTheme("1");      // returns xml
            string LayOutPagePath = MVCTest.Models.ComponentClass.GetLayOutPagePath("1"); // returns string
        }
        @RenderBody()
        test_page
        @temp_xml
        @result
        @LayOutPagePath

    Read the article

  • Execute Ant task with Maven

    - by Gonzalo
    Hi, I'm trying to execute some tests written as Ant tasks from Maven. I generated the files required to import the tasks into Maven, but I can't execute them. My POM is defined this way:

        <build>
          <plugins>
            <plugin>
              <artifactId>maven-ant-plugin</artifactId>
              <version>2.1</version>
              <executions>
                <execution>
                  <phase>generate-sources</phase>
                  <configuration>
                    <tasks>
                      <echo message="Hello, maven"/>
                    </tasks>
                  </configuration>
                  <goals>
                    <goal>run</goal>
                  </goals>
                </execution>
              </executions>
            </plugin>
          </plugins>
        </build>

    I try to execute that echo, but I get an error about the run goal:

        [ERROR] BUILD ERROR
        [INFO] ------------------------------------------------------------------------
        [INFO] 'run' was specified in an execution, but not found in the plugin

    I also know that running "mvn antrun:run" does not run the task. And if I have different targets, how do I call them from Maven? I have the pom.xml and the build.xml with the Ant tasks. Thanks, Gonzalo
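
    One likely cause, offered as a probable fix rather than a certainty: the run goal belongs to the maven-antrun-plugin, not the maven-ant-plugin (which generates Ant build files and has no run goal), so the execution above binds a goal the declared plugin doesn't have. A sketch of the same execution with that artifactId (version is indicative; extra targets in an existing build.xml can be invoked with Ant's <ant> task, target name invented here):

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-antrun-plugin</artifactId>
          <version>1.3</version>
          <executions>
            <execution>
              <phase>generate-sources</phase>
              <goals>
                <goal>run</goal>
              </goals>
              <configuration>
                <tasks>
                  <echo message="Hello, maven"/>
                  <!-- call a named target from an existing build.xml -->
                  <ant antfile="${basedir}/build.xml" target="my-target"/>
                </tasks>
              </configuration>
            </execution>
          </executions>
        </plugin>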

    Read the article

  • How to implement RSA-CBC? (I have uploaded the request document)

    - by tq0fqeu
    I don't know much about ciphers. I just want to implement RSA-CBC, which I take to mean running RSA block encryption in CBC mode, and I have already implemented RSA. Any language is fine; Java would be appreciated, thanks. I copy the assignment below (it may contain spelling mistakes); it is in French, which I don't read. Translated:

    Presentation of the mini-project. The goal of the mini-project is to implement an elementary version of block encryption with RSA and to include this primitive in a block-cipher system with block chaining and a random IV (initial vector). In this system, a plaintext (to be encrypted) is split into blocks of size t (chosen by the user); each (plain) block is encrypted with RSA into an encrypted block of the same size, and the cryptogram associated with the initial plaintext is then obtained by chaining the encrypted blocks with the CBC (cipher-block chaining) method described in the course (see the "Block Ciphers" handout).

    Your program must ask the user for the size t and then, after generating the public and private keys, offer to encrypt or decrypt a (short) ASCII file. It is essential that your program can at least handle the case (very unrealistic from a security point of view) t = 32. To handle larger blocks you will have to implement multi-precision arithmetic routines; for that, I advise you to use free libraries such as GMP (GNU Multiprecision Library). For the random generation of the prime numbers p and q, you may also use specialised libraries, provided you give me all the necessary details.

    You must send me (before a date still to be fixed), at my email address ([email protected]), a message (subject: [MI1-crypto]: assignment; body: the names of the students who worked on the mini-project) with a compressed folder attached containing your commented C or Java sources, your executable program, and a text or PDF file giving full details of the libraries used, your algorithmic and implementation choices, and the reasons for those choices (algorithmic complexity, robustness, ease of implementation, etc.). You may work in pairs or in threes, but I will be noticeably more demanding with groups of three.

    I have uploaded the assignment at http://uploading.com/files/22emmm6b/enonce_projet.pdf/ -- thanks.
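
    To make the chaining part concrete, here is a rough Java sketch of CBC built over an abstract block-encryption call; the encryptBlock/decryptBlock methods stand in for the RSA primitive already implemented, the 4-byte block size matches the t = 32 case, and (as the assignment itself notes) using raw RSA this way is an exercise, not real-world security:

        import java.security.SecureRandom;

        abstract class RsaCbc {
            static final int BLOCK = 4;              // t = 32 bits -> 4 bytes

            // The RSA primitive the assignment asks for; not shown here.
            abstract byte[] encryptBlock(byte[] block);
            abstract byte[] decryptBlock(byte[] block);

            byte[][] encrypt(byte[][] plainBlocks) {
                byte[] prev = new byte[BLOCK];
                new SecureRandom().nextBytes(prev);  // random IV, emitted as block 0
                byte[][] out = new byte[plainBlocks.length + 1][];
                out[0] = prev;
                for (int i = 0; i < plainBlocks.length; i++) {
                    prev = encryptBlock(xor(plainBlocks[i], prev));  // c_i = E(p_i XOR c_{i-1})
                    out[i + 1] = prev;
                }
                return out;
            }

            byte[][] decrypt(byte[][] cipherBlocks) {                // cipherBlocks[0] is the IV
                byte[][] out = new byte[cipherBlocks.length - 1][];
                for (int i = 1; i < cipherBlocks.length; i++) {
                    out[i - 1] = xor(decryptBlock(cipherBlocks[i]), cipherBlocks[i - 1]);
                }
                return out;
            }

            private static byte[] xor(byte[] a, byte[] b) {
                byte[] r = new byte[a.length];
                for (int i = 0; i < a.length; i++) r[i] = (byte) (a[i] ^ b[i]);
                return r;
            }
        }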

    Read the article

  • How to use Svn Version Task to set the Version of a vb project

    - by SchlaWiener
    I have a Visual Studio 2008 solution where the main output is a VB.Net WinForms exe, with several VB.Net and C# DLLs built from the same solution. The whole solution is under version control with Subversion. Now I want to automatically stamp the generated files with the current SVN revision number. For this purpose I found this neat project: http://svnversiontasks.codeplex.com/ (you also need the MSBuild.Community.Tasks for this to work). There was an MSBuild example showing how to update the revision number for every project in the solution, which I use:

        <Import Project="$(MSBuildExtensionsPath)\SvnTools.Targets\SvnTools.Tasks.VersionManagement.Tasks" />
        <Target Name="build">
          <CreateItem Include="../**/AssemblyInfo.vb;../**/AssemblyInfo.cs;../**/Properties/AssemblyInfo.cs">
            <Output TaskParameter="Include" ItemName="AssemblyInfoFiles" />
          </CreateItem>
          <CreateItem Include="../**/*.vdproj;*.vdproj">
            <Output TaskParameter="Include" ItemName="DeploymentProjectFiles" />
          </CreateItem>
          <UpdateVersion AssemblyInfoFiles="@(AssemblyInfoFiles)" DeploymentProjectFiles="@(DeploymentProjectFiles)" Format="yyyy.mm.dd.rev" />
          <Exec Command="&quot;$(VS90COMNTOOLS)..\IDE\devenv&quot; ..\MyApp.sln /build" />
          <RevertVersionChange AssemblyInfoFiles="@(AssemblyInfoFiles)" DeploymentProjectFiles="@(DeploymentProjectFiles)" />
        </Target>

    I modified the original file to also include the AssemblyInfo.vb files and saved it as msbuild.proj. However, when I execute msbuild from the console, I see that the C# projects are updated (I can also confirm that from the properties of the output DLLs), but my VB project remains unchanged:

        Reverting version number change: ../App1\AssemblyInfo.vb
        Updating version number (to rev 0) for file: ../App1\AssemblyInfo.vb
        D:\Source\MyApp\MyAppDeploy\MyAppDeploy.csproj : warning : Version attribute not found, file not updated.
        Reverting version number change: ../App2\Properties\AssemblyInfo.cs
        Updating version number (to rev 0) for file: ../App2\Properties\AssemblyInfo.cs
        Successfully updated file.

    Maybe the task does not support VB.Net? But maybe someone has a solution for this...
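
    One thing worth ruling out, offered as a guess rather than a diagnosis: tools in this family generally rewrite existing AssemblyVersion/AssemblyFileVersion attributes rather than adding them, and the VB file has to declare them in VB syntax for anything to match. Roughly:

        ' AssemblyInfo.vb -- the attributes the updater is expected to rewrite
        Imports System.Reflection

        <Assembly: AssemblyVersion("1.0.0.0")>
        <Assembly: AssemblyFileVersion("1.0.0.0")>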

    Read the article

  • Symfony file upload - "Array" stored in database instead of the actual filename

    - by Guillaume Flandre
    I'm using Symfony 1.4.4 with Doctrine and I need to upload an image to the server. I've done this hundreds of times without any problem, but this time something weird happens: instead of the filename being stored in the database, I find the string "Array". Here's what I'm doing.

    In my form:

        $this->useFields(array('filename'));
        $this->embedI18n(sfConfig::get('app_cultures'));

        $this->widgetSchema['filename'] = new sfWidgetFormInputFileEditable(array(
            'file_src'  => '/uploads/flash/'.$this->getObject()->getFilename(),
            'is_image'  => true,
            'edit_mode' => !$this->isNew(),
            'template'  => '<div id="">%file%</div><div id=""><h3 class="">change picture</h3>%input%</div>',
        ));

        $this->setValidator['filename'] = new sfValidatorFile(array(
            'mime_types' => 'web_images',
            'path'       => sfConfig::get('sf_upload_dir').'/flash',
        ));

    In my action:

        public function executeIndex(sfWebRequest $request)
        {
            $this->flashContents = $this->page->getFlashContents();

            $flash = new FlashContent();
            $this->flashForm = new FlashContentForm($flash);

            $this->processFlashContentForm($request, $this->flashForm);
        }

        protected function processFlashContentForm($request, $form)
        {
            if ($form->isSubmitted($request)) {
                $form->bind($request->getParameter($form->getName()), $request->getFiles($form->getName()));
                if ($form->isValid()) {
                    $form->save();
                    $this->getUser()->setFlash('notice', $form->isNew() ? 'Added.' : 'Updated.');
                    $this->redirect('@home');
                }
            }
        }

    Before binding my parameters, everything is fine: $request->getFiles($form->getName()) returns my files. But afterwards, $form->getValue('filename') returns the string "Array". Has this happened to any of you, or do you see anything wrong with my code?

    Edit: I should add that I'm embedding another form, which may be the problem (see the form code above).
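
    One detail that stands out in the form code above, offered as a guess since the rest of the form class isn't shown: $this->setValidator['filename'] = ... treats setValidator as an array, so the file validator is likely never registered and the raw uploaded-file array ends up being saved as the value. In Symfony 1.4 the usual forms are either of these:

        // register the file validator via the validator schema...
        $this->validatorSchema['filename'] = new sfValidatorFile(array(
            'mime_types' => 'web_images',
            'path'       => sfConfig::get('sf_upload_dir').'/flash',
        ));

        // ...or via the setter method
        $this->setValidator('filename', new sfValidatorFile(array(
            'mime_types' => 'web_images',
            'path'       => sfConfig::get('sf_upload_dir').'/flash',
        )));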

    Read the article

  • Reloading a JTree during runtime

    - by Patrick Kiernan
    I create a JTree and its model in a class separate from the GUI class. The data for the JTree is extracted from a file. In the GUI class the user can add files from the file system to an AWT list. After the user clicks on a file in the list, I want the JTree to update. The variable name for the JTree is schemaTree. I have the following code for when an item in the list is selected:

        private void schemaListItemStateChanged(java.awt.event.ItemEvent evt) {
            int selection = schemaList.getSelectedIndex();
            File selectedFile = schemas.get(selection);
            long fileSize = selectedFile.length();
            fileInfoLabel.setText("Size: " + fileSize + " bytes");

            schemaParser = new XSDParser(selectedFile.getAbsolutePath());
            TreeModel model = schemaParser.generateTreeModel();
            schemaTree.setModel(model);
        }

    I've updated the code to correspond to the accepted answer. The JTree now updates correctly based on which file I select in the list.

    Read the article

  • WDK build-process hooks: need incremental build with auto-versioning

    - by Mystagogue
    I've previously gotten incremental builds with auto-versioning working in a team build setting for user-mode code, but now I'm dealing with the builds of WDK device drivers. It's a whole new ball-game. I need to know what extension point, or hook, is available in the WDK build that occurs after the driver has been selected to be incrementally built, but before it actually starts building the object files. More specifically, I have a .rc file that contains the version of the device driver. I need to update the version in that file ONLY IF the driver is going to be built anyway. If I bump the value in the .rc file prematurely, it will cause the incremental build to kick-off (that is bad). If I wait too long, then the incremental build won't see that I've changed the .rc file. Either way, I do need the WDK to realize that the new version I've placed into the .rc file needs to be built into a new .res file and linked. How do I do this? What suggested extension points should I play with? Is there a link-tutorial on the WDK build process that is particularly revealing regarding this topic?

    Read the article

  • Which CSS combining technique?

    - by DotnetShadow
    Hi there. Which of the following would you say is the best way to go when combining CSS files?

    Say I have a master.css file that is used across all pages on my website (page1.aspx, page2.aspx).

    Page1.aspx - a specific page with some unique CSS that is only ever used on that page, so I create page1.css; the page also uses another stylesheet, grids.css.
    Page2.aspx - another specific page that is different from all other pages on the site, including page1.aspx. I name its stylesheet page2.css; this page doesn't use grids.css.

    So would you combine the stylesheets as:

    Option 1: combine per page
        csshandler.axd?d=master.css,page1.css,grids.css when visiting page1
        csshandler.axd?d=master.css,page2.css when visiting page2
    Benefits: page-specific, rendering is quicker since only selectors for that page need to be matched, no unused selectors.
    Drawback: multiple combinations of master.css + page-specific files, so master.css has to be downloaded again for each page.

    Option 2: combine all stylesheets whether a page needs them or not
        csshandler.axd?d=master.css,page1.css,page2.css,grids.css (master, page1 and page2)
    That way it gets cached as one file. The problem is that rendering may be slower, since the browser has to try to match EVERY selector in the CSS against the page, even the missing ones; in the case of page2.aspx, which doesn't use grids.css, the grids.css selectors still have to be parsed to see if they apply.
    Benefits: one file is downloaded and cached no matter what page you visit.
    Drawback: unused selectors need to be parsed by the browser, slower rendering.

    Option 3: leave the master file on its own and only combine the other stylesheets (the benefit being that master.css is used across all pages, so there's a good chance it's already cached and doesn't need to be downloaded again)
        csshandler.axd?d=master.css
        csshandler.axd?d=page1.css,grids.css
    Benefits: master.css can be cached no matter what page you visit; few unused selectors, since the page-specific CSS is applied.
    Drawback: a minimum of two HTTP requests initially.

    What do you guys think? Cheers, DotnetShadow

    Read the article

  • Red Box is not working

    - by palani
    Hi, I have installed the RedBox plugin using the following command:

        script/plugin install svn://rubyforge.org/var/svn/ambroseplugins/redbox

    It installed successfully. Then I ran "rake update_scripts" from /myapp/vendor/plugins/redbox and got the following output:

        (in /myapp/vendor/plugins/redbox)
        rake aborted!
        private method `copy' called for File:Class
        /home/myapp/vendor/plugins/redbox/Rakefile:28
        (See full trace by running task with --trace)

    I don't know how to solve this. I understand that "rake update_scripts" only copies the JS and CSS files, so I manually copied redbox.js and redbox.css into the respective places under /public. I include the following in my application.html.erb:

        <%= stylesheet_link_tag 'redbox' %>
        <%= javascript_include_tag :defaults %>
        <%= javascript_include_tag 'redbox' %>

    They are included in the page successfully. The following is my view code:

        <%= link_to_remote_redbox('Red_box', :url => {:action => 'log'}, :method => 'get') %>

    The popup box doesn't appear, and I have no clue what the exact error is. Is there a jQuery clash? Please help me.
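
    About the rake failure itself, a hedged guess: File.copy came from the old ftools library and is no longer available on newer Ruby versions, which produces exactly the "private method `copy' called for File:Class" error. If line 28 of the plugin's Rakefile uses it, swapping in FileUtils.cp usually lets rake update_scripts run again, roughly:

        # vendor/plugins/redbox/Rakefile (sketch of the change around line 28)
        require 'fileutils'

        # before: File.copy(src, dest)
        FileUtils.cp(src, dest)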

    Read the article

  • VB.Net Validate an xml against a schema (strange problem)

    - by Apeksha
    I have written a small XML validator that takes an XML file and an XML schema and validates the XML file against that schema. It works well, except for an XML file with this content:

        <?xml version="1.0" encoding="utf-8"?>
        <xc:program xmlns:xc="http:\\www.something.com\Schema\XC10" xc:version="4.0.22.0" >
          <xc:namespaceDecls>
            <xc:namespaceDecl xc:namespaceDeclURI="urn:swift:xsd:abc">
              <xc:namespaceDeclPrefix>n</xc:namespaceDeclPrefix>
            </xc:namespaceDecl>
          </xc:namespaceDecls>
        </xc:program>

    I tried to validate this XML file against a bunch of different schemas. No matter which schema I select, this XML file comes out as valid. What am I missing? Here is the relevant piece of code:

        'Create a schema cache and add the given schema to it.
        Dim schemaCache As New Schema.XmlSchemaSet
        schemaCache.Add(targetNamespace, schemaFilename)

        'Create an XML DOMDocument object.
        Dim xmlDom As New XmlDocument

        'Assign the schema cache to the DOM document schemas collection.
        xmlDom.Schemas = schemaCache

        'Load selected file as the DOM document.
        xmlDom.Load(xmlFilename)

        xmlDom.Validate(AddressOf ValidationCallBack)
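
    A hedged observation: XmlDocument.Validate skips elements for which the loaded schema set has no definition rather than reporting them, and the document's namespace (note the backslashes in http:\\www.something.com\Schema\XC10) will rarely match any schema's target namespace, so nothing actually gets validated and the callback never fires. One way to surface that situation is to validate with an XmlReader and ask for validation warnings; a sketch, assuming Imports System.Xml and System.Xml.Schema:

        Dim settings As New XmlReaderSettings()
        settings.ValidationType = ValidationType.Schema
        settings.Schemas.Add(targetNamespace, schemaFilename)
        ' Report "could not find schema information" as warnings instead of silence.
        settings.ValidationFlags = settings.ValidationFlags Or XmlSchemaValidationFlags.ReportValidationWarnings
        AddHandler settings.ValidationEventHandler, AddressOf ValidationCallBack

        Using reader As XmlReader = XmlReader.Create(xmlFilename, settings)
            While reader.Read()
                ' reading drives validation; ValidationCallBack sees errors and warnings
            End While
        End Using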

    Read the article

  • How to import data to SAP

    - by Mehmet AVSAR
    Hi, as a complete stranger to the world of SAP, I want to transfer data from my own application (mobile salesforce automation) into SAP. My application has records of customers, stocks, inventory, invoices (and waybills), cheques, payments, collections, stock transfer data, etc. I have an additional database that holds the matchings between records: i.e. a customer with ID 345 in my application has key 120-035-0223 in SAP. Every record, for sure, has to know its counterpart, including parameters.

    After searching Google and the SAP help site for a day, I gathered that this is going to be more painful than I expected, especially since the SAP site does not give even a clue about it (or at least I couldn't find one). We have transferred our data to other ERP systems before: some of them wanted XML files, some exposed their APIs. My point is: is SQL Server's SSIS an option for me? I hope it is, so I can fight on my own territory. Since client requests vary a lot, I count flexibility as the most important criterion. Also, I want to transfer as much data as I can. Any help is appreciated. Regards,

    Read the article
