Search Results

Search found 15099 results on 604 pages for 'runtime environment'.

  • Run an ActiveX through Web

    - by balexandre
    We have a webpage that works fine on the local computer, as it uses a COM object that is only available there. The program generates this HTML code:

      <html>
      <head>
      <script type="text/javascript">
      <!--
      function ResizeControl() {
        Y = document.body.clientHeight; if (Y < 1) { Y = 1 }
        X = document.body.clientWidth;  if (X < 1) { X = 1 }
        ActiveX.width = X; ActiveX.height = Y
      }
      -->
      </script>
      <style type="text/css">html, body { overflow:hidden; }</style>
      </head>
      <body OnResize="ResizeControl()" OnLoad="ResizeControl()" leftmargin="0" topmargin="0" rightmargin="0" bottommargin="0">
      <object id="ActiveX" classid="CLSID:8EC68701-329D-4567-BCB5-9EE4BA43D358" width="14" height="14">
        <PARAM NAME="tabName" VALUE="Complaints">
      </object>
      </body>
      </html>

    and it displays fine. My question is: how can we port this into a web environment? The Delphi developer has no idea, and I'm not a Delphi fellow. I want to be able to use this "webpage" at a web address like http://INTRANET/mysite/thispage.html. Any idea, any thought, any door to open is greatly appreciated :)

  • Convert pre-IEEE-754 C++ floating-point numbers to/from C#

    - by Richard Kucia
    Before .NET, before math coprocessors, before IEEE 754, Microsoft defined its own bit pattern for floating-point numbers, and old versions of the C++ compiler happily used that definition. I am writing a C# app that needs to read/write such floating-point numbers in a file. How can I do the conversions between the two bit formats? I need conversion methods in both directions. This app is going to run in a PocketPC/WinCE environment, and changing the structure of the file is out of scope for this project.

    Is there a C++ compiler option that instructs it to use the old FP format? That would be ideal: I could then exchange data between the C# code and C++ code using a null-terminated text string, and the C++ methods would be simple wrappers around the sprintf and atof functions. At the very least, I'm hoping someone can reply with the bit definitions for the old FP format, so I can put together a low-level bit-manipulation algorithm if necessary. Thanks.
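
    As a point of reference, here is what such a conversion could look like in C#. This is a minimal, untested sketch that assumes the file uses the classic 4-byte Microsoft Binary Format single (biased-128 exponent in the last byte, with the sign bit and a 23-bit mantissa below it); verify the layout against your actual data before relying on it:

      using System;

      static class MbfConvert
      {
          // Convert a 4-byte Microsoft Binary Format single to an IEEE 754 float.
          // Assumed MBF layout (little-endian): mantissa[0], mantissa[1],
          // sign + mantissa[2], exponent (bias 128).
          public static float MbfToIeee(byte[] mbf)
          {
              byte exponent = mbf[3];
              if (exponent == 0) return 0.0f;      // MBF encodes zero as exponent 0
              int sign = mbf[2] & 0x80;
              int ieeeExp = exponent - 2;          // bias 128 / 0.1m form -> bias 127 / 1.m form
              if (ieeeExp <= 0) return 0.0f;       // underflows the IEEE normal range
              int mantissa = ((mbf[2] & 0x7F) << 16) | (mbf[1] << 8) | mbf[0];
              int bits = (sign << 24) | (ieeeExp << 23) | mantissa;
              return BitConverter.ToSingle(BitConverter.GetBytes(bits), 0);
          }

          // Convert an IEEE 754 float back to the 4-byte MBF form.
          // No handling of NaN, infinities or denormals, which MBF cannot represent.
          public static byte[] IeeeToMbf(float value)
          {
              if (value == 0.0f) return new byte[4];
              int bits = BitConverter.ToInt32(BitConverter.GetBytes(value), 0);
              int sign = (bits >> 24) & 0x80;
              int exponent = ((bits >> 23) & 0xFF) + 2;  // inverse of the -2 above
              int mantissa = bits & 0x7FFFFF;
              return new byte[]
              {
                  (byte)(mantissa & 0xFF),
                  (byte)((mantissa >> 8) & 0xFF),
                  (byte)(sign | ((mantissa >> 16) & 0x7F)),
                  (byte)exponent
              };
          }
      }

    BitConverter is available in the Compact Framework, so this should run on PocketPC/WinCE as-is if the layout assumption holds.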

  • Quickest algorithm for finding sets with high intersection

    - by conradlee
    I have a large number of user IDs (integers), potentially millions. These users all belong to various groups (sets of integers), such that there are on the order of 10 million groups. To simplify my example and get to the essence of it, let's assume that all groups contain 20 user IDs (i.e., all integer sets have a cardinality of 20). I want to find all pairs of integer sets that have an intersection of 15 or greater.

    Should I compare every pair of sets? (If I keep a data structure that maps user IDs to set membership, this would not be necessary.) What is the quickest way to do this? That is, what should my underlying data structure be for representing the integer sets? Sorted sets, unsorted? Can hashing somehow help? And what algorithm should I use to compute the set intersections? I prefer answers that relate to C/C++ (especially STL), but any more general algorithmic insights are welcome.

    Update: Also, note that I will be running this in parallel in a shared-memory environment, so ideas that cleanly extend to a parallel solution are preferred.
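
    By way of illustration, the standard way to avoid comparing all pairs is an inverted index: map each user ID to the groups containing it, then count co-occurrences per group pair, so only pairs that actually share members are ever touched. A rough C++ sketch (names and the threshold handling are illustrative, not tuned):

      #include <cstddef>
      #include <map>
      #include <unordered_map>
      #include <utility>
      #include <vector>

      // groups[g] is the list of user IDs in group g.
      std::vector<std::pair<int, int>> HighOverlapPairs(
          const std::vector<std::vector<int>>& groups, int threshold)
      {
          // Inverted index: user ID -> indices of the groups containing that user.
          std::unordered_map<int, std::vector<int>> userToGroups;
          for (std::size_t g = 0; g < groups.size(); ++g)
              for (int user : groups[g])
                  userToGroups[user].push_back(static_cast<int>(g));

          // Count shared members per group pair; pairs with no overlap never appear.
          std::map<std::pair<int, int>, int> overlap;
          for (const auto& entry : userToGroups) {
              const std::vector<int>& gs = entry.second;
              for (std::size_t i = 0; i < gs.size(); ++i)
                  for (std::size_t j = i + 1; j < gs.size(); ++j)
                      ++overlap[{gs[i], gs[j]}];
          }

          std::vector<std::pair<int, int>> result;
          for (const auto& kv : overlap)
              if (kv.second >= threshold)
                  result.push_back(kv.first);
          return result;
      }

    The cost is dominated by users who belong to many groups (the inner double loop is quadratic in that count), and the overlap map is the natural thing to shard, for example by first group ID, if the work is split across threads in shared memory.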

  • Delphi dbExpress and Interbase: UTF8 migration steps and risks?

    - by mjustin
    Currently, our database uses Win1252 as its only character encoding. We will have to support Unicode in the database tables soon, which means we have to perform this migration for four databases and around 80 Delphi applications which run in-house in a 24/7 environment. Are there recommendations for database migrations to UTF-8 (or UNICODE_FSS) for Delphi applications? Some questions are listed below. Many thanks in advance for your answers!

    - Are there tools which help with the migration of the existing databases (sizes between 250 MB and 2 GB, no Blob fields) by dumping the data, recreating the database with UNICODE_FSS or UTF-8, and loading the data back?
    - Are there known problems with Delphi 2009, dbExpress and Interbase 7.5 related to Unicode character sets?
    - Would you recommend upgrading the databases to Interbase 2009 first? (This upgrade is planned but does not have a high priority.)
    - Can we simply migrate the database and Delphi will handle the Unicode character sets automatically, or will we have to change all character field types in every data module (dfm and source code) too?
    - Which strategy would you recommend for working on the migration in parallel with the normal development and maintenance of the existing application? The application runs in-house, so development and database administration are done internally.

  • Send document to printer from web page

    - by FiveTools
    I have a webpage that activates a print job on a printer. This works in the localhost environment but does not work when the application is deployed to the web server. I'm using the PrintDocument class from the .NET System.Drawing.Printing namespace. I'm now assuming the printer has to be available to the application on the remote server? Any suggestions on how I would get this to work?

      PrintDocument pd = new PrintDocument();
      PaperSource ps = new PaperSource();
      pd.DefaultPageSettings.PaperSize = new System.Drawing.Printing.PaperSize("Custom", 1180, 850);
      pd.PrintPage += new PrintPageEventHandler(this.pd_PrintPage);

      // Set your printer's name. Obtain from
      // System's Printer Dialog Box.
      pd.PrinterSettings.PrinterName = "Okidata ML 321 Turbo/D (IBM)";

      //PrintPreviewDialog dlgPrintPvw = new PrintPreviewDialog();
      //dlgPrintPvw.Document = pd;
      //dlgPrintPvw.Focus();
      //dlgPrintPvw.ShowDialog();

      pd.Print();

  • Ruby script/console and Ruby script/server using two different DBs?

    - by aronchick
    Has anyone seen script/console and script/server load two different databases (though both report using the same one)? Here's the first output:

      $ script/server
      => Booting WEBrick
      => Rails 2.3.5 application starting on http://0.0.0.0:3000
      => Call with -d to detach
      => Ctrl-C to shutdown server
      [2010-03-21 15:54:05] INFO  WEBrick 1.3.1
      [2010-03-21 15:54:05] INFO  ruby 1.8.7 (2010-01-10) [i386-mingw32]
      [2010-03-21 15:54:05] INFO  WEBrick::HTTPServer#start: pid=7148 port=3000

    No errors. I then run my standard code for entering a form, with no problems. Checking the dev database (.yml at bottom):

      mysql> select * from books;
      [...]
      | 712 | Book | Book Name | 2010-03-21 22:29:22 | 2010-03-21 22:29:22 |
      [...]
      712 rows in set (0.00 sec)

    The code CLEARLY saved it seconds ago. And now here's the output of script/console:

      $ script/console
      Loading development environment (Rails 2.3.5)
      >> Books.all
      => []

    Nothing. Upon further inspection, it's using the production database, but I can't figure out why. Any thoughts here? All consoles have been closed and reopened.

    UPDATE: Requested .yml file (I can't see how it'd be helpful; the user name and password are the same for each):

      development:
        adapter: mysql
        database: BooksDBdev
        username: <user name>
        password: <long string>
        timeout: 5000

      # Warning: The database defined as "test" will be erased and
      # re-generated from your development database when you run "rake".
      # Do not set this db to the same as development or production.
      test:
        adapter: mysql
        database: BooksDBtest
        username: <user name>
        password: <long string>
        timeout: 5000

      production:
        adapter: mysql
        database: BooksDB
        username: <user name>
        password: <long string>
        timeout: 5000
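
    For diagnosing this kind of mismatch, it can help to ask each process directly which environment and database it is connected to; with the mysql adapter, something like the following in script/console (and, say, logged from a controller under script/server) should make any divergence obvious:

      >> Rails.env
      => "development"
      >> ActiveRecord::Base.connection.current_database
      => "BooksDBdev"

    The output shown is what you would expect if the console really were on development; a RAILS_ENV variable set in the shell's environment is a common way for the two commands to end up on different databases.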

  • Can ElementTree be told to preserve the order of attributes?

    - by dmckee
    I've written a fairly simple filter in Python using ElementTree to munge the contents of some XML files. And it works, more or less. But it reorders the attributes of various tags, and I'd like it to not do that. Does anyone know a switch I can throw to make it keep them in the specified order?

    Context for this: I'm working with and on a particle-physics tool that has a complex, but oddly limited, configuration system based on XML files. Among the many things set up that way are the paths to various static data files. These paths are hardcoded into the existing XML, there are no facilities for setting or varying them based on environment variables, and in our local installation they are necessarily in a different place. This isn't a disaster, because the combined source- and build-control tool we're using allows us to shadow certain files with local copies. But even though the data files are static, the XML isn't, so I've written a script for fixing the paths. With the attribute rearrangement, though, diffs between the local and master versions are harder to read than necessary.

    This is my first time taking ElementTree for a spin (and only my fifth or sixth Python project), so maybe I'm just doing it wrong. Abstracted for simplicity, the code looks like this:

      tree = elementtree.ElementTree.parse(inputfile)
      i = tree.getiterator()
      for e in i:
          e.text = filter(e.text)
      tree.write(outputfile)

    Reasonable or dumb?

    Related links:
    - How can I get the order of an element attribute list using Python xml.sax?
    - Preserve order of attributes when modifying with minidom
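
    One way around the diff-noise problem, rather than fighting the serializer, is to canonicalize both versions before comparing: since ElementTree rewrites attributes in its own deterministic order, running the untouched master file through the same parse/write cycle makes the two files diff cleanly. A minimal sketch (file names are placeholders):

      import xml.etree.ElementTree as ET

      def normalize(inputfile, outputfile):
          # Parse and immediately re-serialize, so both files end up with
          # ElementTree's attribute ordering and quoting conventions.
          ET.parse(inputfile).write(outputfile)

      normalize("master.xml", "master.normalized.xml")
      normalize("local.xml", "local.normalized.xml")
      # diff the two .normalized.xml files to see only real changes

    (As an aside, CPython 3.8 and later preserve attribute order as parsed, so the reordering described above no longer happens on modern versions.)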

  • Why is OpenSubKey() returning null on my Win 7 64 bit system?

    - by BrMcMullin
    Has anyone seen OpenSubKey() and other Microsoft.Win32 registry functions return null on 64-bit systems when 32-bit registry keys are under Wow6432Node in the registry?

    I'm working on a unit-testing framework that makes a call to OpenSubKey() from the .NET library. My dev system is a Win 7 64-bit environment with VS 2008 SP1 and the Win 7 SDK installed. The application we're unit testing is a 32-bit application, so the registry is virtualized under HKLM\Software\Wow6432Node. When we call:

      Registry.LocalMachine.OpenSubKey( @"Software\MyCompany\MyApp\" );

    null is returned. However, explicitly stating where to look works:

      Registry.LocalMachine.OpenSubKey( @"Software\Wow6432Node\MyCompany\MyApp\" );

    From what I understand, this function should be agnostic to 32-bit or 64-bit environments and should know to jump to the virtual node. Even stranger is the fact that the exact same call inside a compiled and installed version of our application runs just fine on the same system and gets the registry keys necessary to run, which are also placed under HKLM\Software\Wow6432Node. Any suggestions? Thanks in advance!
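
    The most likely explanation is that the unit-test host runs as a 64-bit process, so no WOW64 redirection is applied to its registry reads, while the installed application runs 32-bit and is silently redirected to Wow6432Node. If targeting .NET 4 or later is an option, the registry view can be selected explicitly instead of hard-coding the Wow6432Node path; a short sketch with placeholder key names:

      using System;
      using Microsoft.Win32;

      class Program
      {
          static void Main()
          {
              // RegistryView.Registry32 maps HKLM\Software to HKLM\Software\Wow6432Node
              // on a 64-bit OS, so the same path works whether the caller is 32- or 64-bit.
              using (RegistryKey baseKey = RegistryKey.OpenBaseKey(
                         RegistryHive.LocalMachine, RegistryView.Registry32))
              using (RegistryKey appKey = baseKey.OpenSubKey(@"Software\MyCompany\MyApp\"))
              {
                  Console.WriteLine(appKey == null ? "not found" : "found");
              }
          }
      }

    On .NET 3.5, which is what VS 2008 targets, the equivalent requires P/Invoking RegOpenKeyEx with the KEY_WOW64_32KEY flag.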

  • What java web application framework to use?

    - by frohiky
    One of the main products of my company is an Oracle Forms (and Reports) based application that "needs" to be re-written in another technology. Why? Users want a richer interface experience, and we want, preferably, to reduce costs with an open-source application server. For this (HUGE) project, we intend to use a Java web application framework. Keep these points in mind.

    We have:
    - hundreds of tables in our database (the ORM must be as flexible as possible);
    - some logic which is (and will still be) based on PL/SQL procedures/functions/packages;
    - a lot of CRUDs (the application itself is of a considerable size);
    - a demand to work with/generate documents and workflows;
    - an intranet-based user environment.

    We want:
    - to offer a RIA interface experience;
    - to use (if possible) an open-source app server;
    - a rapid (as possible) development framework;
    - a somewhat mature framework with a "wise" roadmap (and considerable community support);
    - an MVC approach combined with JS or GWT widgets (e.g. Vaadin or SmartGWT).

    Well, in the past weeks I've read a lot of posts and Q&As here on stackoverflow, and much more: Wicket, JSF, Tapestry, Grails, GWT, Struts2, Play, Spring, Seam, Echo... the list goes on! I've even researched Alfresco! The obvious question: which one to use? At this time, any insight, recommendation, shared experience or advice will be more than welcome!

  • Can I ignore a SIGFPE resulting from division by zero?

    - by Mikeage
    I have a program which deliberately performs a division by zero (and stores the result in a volatile variable) in order to halt in certain circumstances. However, I'd like to be able to disable this halting, without changing the macro that performs the division by zero. Is there any way to ignore it? I've tried using:

      #include <signal.h>
      ...
      int main(void)
      {
          signal(SIGFPE, SIG_IGN);
          ...
      }

    but it still dies with the message "Floating point exception (core dumped)". I don't actually use the value, so I don't really care what's assigned to the variable: 0, random, undefined...

    EDIT: I know this is not the most portable, but it's intended for an embedded device which runs on many different OSes. The default halt action is to divide by zero; other platforms require different tricks to force a watchdog-induced reboot (such as an infinite loop with interrupts disabled). For a PC (Linux) test environment, I wanted to disable the halt on division by zero without relying on things like assert.
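
    For what it's worth, SIG_IGN cannot work here: if the signal is ignored, execution resumes at the faulting divide and the fault is raised again (ignoring a synchronous SIGFPE is undefined behavior anyway). The usual trick on POSIX systems is a handler that jumps somewhere else instead of returning; a minimal sketch, assuming Linux for the test build:

      #include <setjmp.h>
      #include <signal.h>
      #include <stdio.h>
      #include <string.h>

      static sigjmp_buf recovery;

      static void fpe_handler(int sig)
      {
          (void)sig;
          siglongjmp(recovery, 1);   /* jump past the faulting instruction */
      }

      int main(void)
      {
          struct sigaction sa;
          memset(&sa, 0, sizeof sa);
          sa.sa_handler = fpe_handler;
          sigemptyset(&sa.sa_mask);
          sigaction(SIGFPE, &sa, NULL);

          volatile int zero = 0;
          volatile int result = -1;
          if (sigsetjmp(recovery, 1) == 0)   /* 1: save and restore the signal mask */
              result = 1 / zero;             /* raises SIGFPE */
          printf("still running, result=%d\n", result);
          return 0;
      }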

  • Why is there unreachable code here?

    - by Richard
    I am writing a C# app and want to output error messages to either the console or a message box (depending on the app type: enum AppTypeChoice { Console, Windows }), and also control whether the app keeps running or not (bool StopOnError). I came up with this method that will check all the criteria, but I'm getting an "unreachable code detected" warning, and I can't see why! Here is the whole method (brace yourselves for some hobbyist code!):

      public void OutputError(string message)
      {
          string standardMessage = "Something went WRONG!. [ But I'm not telling you what! ]";
          string defaultMsgBoxTitle = "Aaaaarrrggggggggggg!!!!!";
          string dosBoxOutput = "\n\n*** " + defaultMsgBoxTitle + " *** \n\n Message was: '" + message + "'\n\n";

          AppTypeChoice appType = DataDefs.AppType;
          DebugLevelChoice level = DataDefs.DebugLevel;

          // Decide how much info we should give out here...
          if (level != DebugLevelChoice.None)
          {
              // Give some info....
              if (appType == AppTypeChoice.Windows)
                  MessageBox.Show(message, defaultMsgBoxTitle, MessageBoxButtons.OK, MessageBoxIcon.Error);
              else
                  Console.WriteLine(dosBoxOutput);
          }
          else
          {
              // Be very secretive...
              if (appType == AppTypeChoice.Windows)
                  MessageBox.Show(standardMessage, defaultMsgBoxTitle, MessageBoxButtons.OK, MessageBoxIcon.Error);
              else
                  Console.WriteLine(standardMessage);
          }

          // Decide if app falls over or not..
          if (DataDefs.StopOnError == true)
              Environment.Exit(0);
          // UNREACHABLE CODE HERE
      }

    Also, while I have your attention: to get the app type, I'm just using a constant at the top of the file (i.e. AppTypeChoice.Console in a console app, etc.). Is there a better way of doing this (I mean, finding out in code whether it is a console or Windows app)? Also, I noticed that I can use a message box with a fully-qualified path in a console app... How bad is it to do that? (I mean, will I get tarred and feathered when other developers see it?!) Thanks for your help.

  • Starting with versioning mysql schemata without overkill. Good solutions?

    - by tharkun
    I've arrived at the point where I realise that I must start versioning my database schemata and changes. I consequently read the existing posts on SO about that topic, but I'm not sure how to proceed. I'm basically a one-man company, and not long ago I didn't even use version control for my code. I'm on a Windows environment, using Aptana (IDE) and SVN (with Tortoise), and I work on PHP/MySQL projects. What's an efficient and sufficient (no overkill) way to version my database schemata? I do have a freelancer or two on some projects, but I don't expect a lot of branching and merging going on. So basically I would like to keep track of schemata concurrent with my code revisions.

    [edit] Momentary solution: for the moment I decided I will just make a schema dump, plus one with the necessary initial data, whenever I'm going to commit a tag (stable version). That seems to be just enough for me at the current stage. [/edit]

    [edit2] Plus, I'm now also using a third file called increments.sql, where I put all the changes with dates, etc., to make it easy to trace the change history in one file. From time to time I integrate the changes into the two other files and empty increments.sql. [/edit2]
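
    For the record, the tag-time dump described above reduces to a couple of commands, e.g. (database, table and file names are placeholders):

      mysqldump --no-data --routines mydb > db/schema.sql
      mysqldump --no-create-info mydb lookup_table_a lookup_table_b > db/initial_data.sql
      svn add db/schema.sql db/initial_data.sql
      svn commit -m "schema and seed-data snapshot for stable tag"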

  • Good mobile oriented GWT widget library alternatives

    - by Michael Donohue
    I've been developing a travel-planning site - tripgrep.com - which is built on App Engine, GWT and SmartGWT, among other technologies. It is still early days, and the site now works well in my development environment, which is either a Windows or Mac computer. However, I am frequently talking up the website to my friends when we are at a bar or other venue, so I am standing there while they try to access the site via an iPhone, Android or Blackberry - I've witnessed all three. It has been painfully obvious that the browser-based frontend takes a long time to download on a mobile device. I am pretty sure this is because of the JavaScript download for SmartGWT.

    So, I would like to look at alternatives to SmartGWT. What I like about SmartGWT is that it has a reasonable look and feel out of the box - I don't need to learn any design or CSS - and it has an office-application look. This is considerably better than the GWT built-in widgets, which just get a blue border. The better look and feel is why I went with SmartGWT early on. However, the slow load times are killing me in these mobile demos. So now I want a fast-loading widget alternative that has a good look and feel out of the box. The features I care about are: tabs, good form layout, Google Maps API integration, and grid data viewing. If those are all available in a library that loads quickly on a mobile device, then that's the library I want.

  • How to modify/replace option set file when building from command line?

    - by Heinrich Ulbricht
    I build packages from a batch file using commands like:

      msbuild ..\lib\Package.dproj /target:Build /p:config=%1

    The packages' settings depend on an option set:

      <Import Project="..\optionsets\COND_Defined.optset" Condition="'$(Base)'!='' And Exists('..\optionsets\COND_Defined.optset')"/>

    This option set defines a conditional symbol many of my packages depend on. The file looks like this:

      <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
        <PropertyGroup>
          <DCC_Define>CONDITION;$(DCC_Define)</DCC_Define>
        </PropertyGroup>
        <ProjectExtensions>
          <Borland.Personality>Delphi.Personality.12</Borland.Personality>
          <Borland.ProjectType>OptionSet</Borland.ProjectType>
          <BorlandProject>
            <Delphi.Personality/>
          </BorlandProject>
          <ProjectFileVersion>12</ProjectFileVersion>
        </ProjectExtensions>
      </Project>

    Now I need two builds: one with the condition defined and one without. My attack vector would be the option set file. I have some ideas on what to do:

    - write a program which modifies the option set file and run it before the batch build (a copy-based variant is sketched after this list);
    - fiddle with the project files and modify the option set path to contain an environment variable, then have different option sets in different locations.

    But before starting to reinvent the wheel, I'd like to ask how you would tackle this task. Maybe there are already means meant to support such a case (like certain command-line switches, things I could configure in Delphi, or batch-file magic).
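
    A sketch of the copy-based variant of the first idea above (the .with/.without file names are made up for the example): keep two prepared copies of the option set and copy the right one into place before each build pass:

      rem Build once with the symbol defined and once without, by swapping
      rem the option set the projects import before each pass.
      copy /Y ..\optionsets\COND_Defined.with.optset ..\optionsets\COND_Defined.optset
      msbuild ..\lib\Package.dproj /target:Build /p:config=%1

      copy /Y ..\optionsets\COND_Defined.without.optset ..\optionsets\COND_Defined.optset
      msbuild ..\lib\Package.dproj /target:Build /p:config=%1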

  • How do I fix this installation problem with multicore Solr on Ubuntu 10.04?

    - by coleifer
    Following the instructions from the two sites below, I've installed Tomcat 6 and Solr 1.4:

    - http://gist.github.com/204638
    - https://wiki.fourkitchens.com/display/TECH/Solr+1.4+on+Ubuntu+9.10+and+CentOS+5

    I have successfully got it up and running on a server running 9.04 with multicore support, but on 10.04 I can't seem to get it to work. I am able to reach localhost:xxxx/solr/ on the 10.04 box and see a single link to the Solr Admin, but following the link takes me to a 404 page with the following output:

      /solr/admin/
      HTTP Status 404 - missing core name in path
      The requested resource (missing core name in path) is not available

    I am also unable to access /solr/site1/ as I would expect; it similarly returns a 404.

      <!-- from /var/solr/solr.xml, site dirs exist -->
      <cores adminPath="/admin/cores">
        <core name="site1" instanceDir="site1" />
        <core name="site2" instanceDir="site2" />
      </cores>

      <!-- from /etc/tomcat6/Catalina/localhost/solr.xml -->
      <Context docBase="/var/solr/solr.war" debug="0" privileged="true" allowLinking="true" crossContext="true">
        <Environment name="solr/home" type="java.lang.String" value="/var/solr" override="true" />
      </Context>
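
    As a first diagnostic, the CoreAdmin handler (the adminPath configured above) reports which cores the running Solr actually registered; if it lists none, solr.xml is not being picked up from the solr/home you set. The port below is a placeholder for your Tomcat port:

      curl "http://localhost:8080/solr/admin/cores?action=STATUS"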

  • Ruby on Rails: undefined method 'version_requirements' when attempting to start server after new install

    - by ezabak
    Hi there. I recently had to do a fresh install of Ruby on Rails. When I attempted to start the server for a project I had been working on before the new install, I received the following error:

      $ ruby script/server
      => Booting WEBrick...
      ./script/../config/../vendor/rails/railties/lib/rails/gem_dependency.rb:107:in `requirement': undefined method `version_requirements' for #<Gem::Dependency:0xb74bf764> (NoMethodError)
        from ./script/../config/../vendor/rails/railties/lib/initializer.rb:292:in `check_gem_dependencies'
        from ./script/../config/../vendor/rails/railties/lib/initializer.rb:292:in `map'
        from ./script/../config/../vendor/rails/railties/lib/initializer.rb:292:in `check_gem_dependencies'
        from ./script/../config/../vendor/rails/railties/lib/initializer.rb:165:in `process'
        from ./script/../config/../vendor/rails/railties/lib/initializer.rb:112:in `send'
        from ./script/../config/../vendor/rails/railties/lib/initializer.rb:112:in `run'
        from /media/78C0-455B/bidmc/schedule/config/environment.rb:13
        from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:36:in `gem_original_require'
        from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:36:in `require'
        from /media/78C0-455B/bidmc/schedule/vendor/rails/activesupport/lib/active_support/dependencies.rb:153:in `require'
        from /media/78C0-455B/bidmc/schedule/vendor/rails/activesupport/lib/active_support/dependencies.rb:521:in `new_constants_in'
        from /media/78C0-455B/bidmc/schedule/vendor/rails/activesupport/lib/active_support/dependencies.rb:153:in `require'
        from /media/78C0-455B/bidmc/schedule/vendor/rails/railties/lib/commands/servers/webrick.rb:59
        from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:36:in `gem_original_require'
        from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:36:in `require'
        from /media/78C0-455B/bidmc/schedule/vendor/rails/activesupport/lib/active_support/dependencies.rb:153:in `require'
        from /media/78C0-455B/bidmc/schedule/vendor/rails/activesupport/lib/active_support/dependencies.rb:521:in `new_constants_in'
        from /media/78C0-455B/bidmc/schedule/vendor/rails/activesupport/lib/active_support/dependencies.rb:153:in `require'
        from /media/78C0-455B/bidmc/schedule/vendor/rails/railties/lib/commands/server.rb:49
        from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:36:in `gem_original_require'
        from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:36:in `require'
        from script/server:3

    I have the latest versions of Ruby, RubyGems and Rails. Any suggestions? Thanks.

  • Visual Studio: How to attach a debugger dynamically to a specific process

    - by Jeff Cyr
    I am building an internal dev tool to manage the different processes commonly used in our development environment. The tool shows the list of monitored processes, indicates their running state, and allows starting or stopping each process.

    I'd like to add the ability to attach a debugger to a monitored process from my tool, instead of going to 'Debug - Attach to Process' in Visual Studio and finding the process. My goal is to have something like Debugger.Launch() that would show a list of the available Visual Studio instances. I can't use Debugger.Launch() itself, because it launches the debugger on the process that makes the call; I would need something like Debugger.Launch(processId). Does anyone know how to achieve this functionality?

    A solution could be to implement a command in each monitored process that calls Debugger.Launch() when the command is received from the monitoring tool, but I would prefer something that does not require modifying the code of the monitored processes.

    Side question: when using Debugger.Launch(), instances of Visual Studio that already have a debugger attached are not listed. Visual Studio is not limited to one attached debugger - you can attach to multiple processes when using 'Debug - Attach to Process'. Does anyone know how to bypass this limitation when using Debugger.Launch(), or an alternative?
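
    One way to get "attach from the outside" without touching the monitored processes is Visual Studio's DTE automation model: from the dev tool, grab a running VS instance and ask its debugger to attach to the target PID. A hedged sketch ("VisualStudio.DTE.9.0" is VS 2008; note that GetActiveObject returns an arbitrary running instance, so picking a specific one means walking the COM Running Object Table instead):

      using System;
      using System.Runtime.InteropServices;
      using EnvDTE;  // reference EnvDTE.dll (the Visual Studio automation assembly)

      static class DebuggerAttacher
      {
          // Attach a running Visual Studio's debugger to the process with this id.
          public static void Attach(int processId)
          {
              DTE dte = (DTE)Marshal.GetActiveObject("VisualStudio.DTE.9.0");
              foreach (Process process in dte.Debugger.LocalProcesses)
              {
                  if (process.ProcessID == processId)
                  {
                      process.Attach();
                      return;
                  }
              }
              throw new ArgumentException("No such debuggable process: " + processId);
          }
      }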

  • How to organize integrity tests and code unit tests?

    - by karlthorwald
    I have several files with code testing code (which uses a "unittest" class). Later I found it would be nice to also test database integrity - things like whether keys have the correct format and whether parent and child nodes point to each other correctly. I put this into a separate directory tree and use the same unittest class for the integrity tests.

    Now I wonder if it really makes sense to keep these separate. To test the integrity of the data, I often duplicate parts of the code that I use to test the code that handles the data. But it is not the same: the code tests use test databases (which get deleted after each test), while the integrity tests connect to the live data and analyze it. The integrity tests I want to call from cron, and send an alarm if something happens in the live database.

    How would you handle that? Are there standards for such a setup? What is your experience? My tendency is to put everything in the same file, but that would result in the code tests also being executed by cron on the production environment.

  • Rubygems on Debian: Gems won't load (LoadError)

    - by daswerth
    I've installed the development version of CrunchBang, a Linux distro based off Debian. I got Ruby and RubyGems installed, but I can't get the gems I've installed to load. Here is a command-line session:

      $ ruby -v
      ruby 1.9.1p378 (2010-01-10 revision 26273) [i486-linux]
      $ gem env
      RubyGems Environment:
        - RUBYGEMS VERSION: 1.3.6
        - RUBY VERSION: 1.9.1 (2010-01-10 patchlevel 378) [i486-linux]
        - INSTALLATION DIRECTORY: /usr/lib/ruby1.9.1/gems/1.9.1
        - RUBY EXECUTABLE: /usr/bin/ruby1.9.1
        - EXECUTABLE DIRECTORY: /usr/bin
        - RUBYGEMS PLATFORMS:
          - ruby
          - x86-linux
        - GEM PATHS:
          - /usr/lib/ruby1.9.1/gems/1.9.1
          - /home/corey/.gem/ruby/1.9.1
        - GEM CONFIGURATION:
          - :update_sources => true
          - :verbose => true
          - :benchmark => false
          - :backtrace => false
          - :bulk_threshold => 1000
        - REMOTE SOURCES:
          - http://rubygems.org/
      $ echo $PATH
      /home/corey/bin:/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games:/home/corey/.gem/ruby/1.9.1:/usr/lib/ruby1.9.1/gems/1.9.1
      $ gem list -d nokogiri

      *** LOCAL GEMS ***

      nokogiri (1.4.1)
          Authors: Aaron Patterson, Mike Dalessio
          Rubyforge: http://rubyforge.org/projects/nokogiri
          Homepage: http://nokogiri.org
          Installed at: /usr/lib/ruby1.9.1/gems/1.9.1

          Nokogiri (?) is an HTML, XML, SAX, and Reader parser
      $ ruby -r rubygems -e "require 'nokogiri'"
      -e:1:in `require': no such file to load -- nokogiri (LoadError)
          from -e:1:in `<main>'

    I've encountered similar problems on Ubuntu before, but they were easy to fix. I can't figure out what's wrong in this particular case, and Google didn't seem to know either. Any help would be greatly appreciated!

    By the way, this is my first submission to stackoverflow. I hope this question is relevant. :)

  • protecting grails melody with grails filter

    - by batmannavneet
    I have an application where I am using Spring Security along with Grails Melody. I plan to run Grails Melody in the production environment, but I don't want visitors to have access to it. How should I achieve that? I tried creating a filter in Grails (just a sample of what I am trying, not the actual code):

      def filters = {
          allURIs(uri: '/**') {
              before = {
                  //...
                  if (request.forwardURI.indexOf("admin") != -1 ||
                      request.forwardURI.indexOf("monitoring") != -1) {
                      response.sendError 404
                      return false
                  }
              }
          }
      }

    But this doesn't work, as the request for "monitoring" never hits this filter. I don't even want the user to know that such a URL exists, so I want the filter to return the 404 error page whenever "monitoring" is the URL. That's also the reason why I don't want to protect this URL with Spring Security, which would show an "access denied" page. Basically, I want the URLs to exist but be invisible to users; access should be open only to certain IP addresses.

    On another note, is it possible to write a Grails filter that acts before the Spring Security filter is hit? I want to be able to do some filtering before I forward requests to Spring Security. Writing a Grails filter like the one above doesn't help: the Spring Security filter gets hit first if I access a protected resource, and this filter doesn't get called. Thanks
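
    For the IP-restriction part, a Grails filter along these lines could do it (the addresses are placeholders) - with the caveat, per the problem described above, that it only helps once requests for /monitoring actually reach the Grails filter chain:

      class MonitoringFilters {
          def allowedIps = ['127.0.0.1', '10.1.2.3']  // placeholder addresses

          def filters = {
              monitoring(uri: '/monitoring/**') {
                  before = {
                      if (!allowedIps.contains(request.remoteAddr)) {
                          response.sendError(404)  // pretend the page does not exist
                          return false
                      }
                      return true
                  }
              }
          }
      }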

  • MongoDB - proper use of collections?

    - by zmg
    In Mongo, my understanding is that you can have databases and collections. I'm working on a social-type app that will have blogs and comments (among other things), and I had previously been using MySQL with pretty heavy partitioning in an attempt to limit possible concurrency issues. With MySQL I've stuffed all my user data into a _user database, with several tables to further partition the data (blogs, pages, etc.).

    My immediate reaction with Mongo would be to create a 'users' database with one collection per user. In this way, user 'zach's blog entries would go into the 'zach' collection, with associated comments and such becoming sub-objects in the same collection. Basically like dynamically creating one table per user in MySQL, but apparently without the complexity and limitations that might impose. Of course, since I haven't really used Mongo before, I'm having trouble gauging the (ahem...) quality of this idea and the potential problems it might cause down the road.

    I'd like user data to be treated a lot like a user's directory in a *nix environment, where user-created/non-shared (mostly) data gets put into one place (currently with MySQL that would be the _user database mentioned above). Most of the user data will be specific to the user's page(s). Some of the user data which is queried across all site users (searchable user profiles) is currently kept in a separate database/table, and I expect things like this could be put into an appname_system database and be broken up into collections and/or application-specific databases (appname_profiles).

    Anyway, since the available documentation on this is currently a little thin and my experience is extremely limited, I thought I might find a little guidance from someone with a better working understanding of the system. On the plus side, I'd really already been attempting to treat MySQL as a schema-less document store, and doing this with Mongo seems much more intuitive/sane/rational, so I'm really looking forward to getting started. Thanks, Zach
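
    For comparison, the usual alternative to one-collection-per-user is a single collection with an indexed owner field; a per-user collection scheme can run into per-database namespace limits well before "millions of users". A sketch in the mongo shell (field names are placeholders):

      // One shared collection; each document carries its owner.
      db.blogs.insert({ owner: "zach",
                        title: "First post",
                        comments: [{ by: "anna", text: "hi!" }] })
      db.blogs.ensureIndex({ owner: 1 })   // makes per-user queries cheap
      db.blogs.find({ owner: "zach" })     // everything belonging to 'zach'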

  • How to override the default init.tcl

    - by Sean Murphy
    I'm working on a project where I want to make use of Tcl as the command interpreter. I have a working C library object which I can load from within the Tcl shell, but my problem is finding a way to do this automatically when starting tclsh. Essentially, my ultimate goal is to be able to run a script that loads my library and runs some initial startup Tcl code before dropping me back to the tclsh command prompt in interactive mode, e.g.:

      tclsh -f myscript.tcl --then-switch-to-interactive

    or:

      EXPORT TCLINIT=myscript.tcl
      tclsh

    The basic goal is to avoid having to distribute tclsh and instead rely on local user installations of Tcl. All I would like to distribute is my library, a startup script, and a shell command to launch tclsh with the library preloaded. I've tried using the environment variables TCLINIT and TCL_LIBRARY, but they seem to have no effect. The only workable solutions I've found so far are to add "source myscript.tcl" either to the end of /usr/share/tcltk/tcl8.5/init.tcl or to ~/.tclshrc. However, both of these "solutions" are imperfect, as they require modifying the default user's workspace. It strikes me that there must be a way to handle this in Tcl, but my research so far hasn't yielded anything. Does anyone have any suggestions?
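
    In the absence of a built-in hook, one self-contained pattern is to make the startup script itself provide the interactive prompt: load the library, run the setup code, then fall into a small read-eval-print loop written in plain Tcl. A sketch with placeholder library names:

      #!/usr/bin/env tclsh
      # startup.tcl -- load the extension, then hand over an interactive prompt.
      load ./mylib.so Mylib   ;# placeholder library file and package name
      puts "mylib loaded; enter Tcl commands, Ctrl-D to exit."

      set cmd ""
      while {![eof stdin]} {
          puts -nonewline [expr {$cmd eq "" ? "% " : "> "}]
          flush stdout
          if {[gets stdin line] < 0} break
          append cmd $line \n
          if {[info complete $cmd]} {
              if {[catch {uplevel #0 $cmd} result]} {
                  puts stderr $result
              } elseif {$result ne ""} {
                  puts $result
              }
              set cmd ""
          }
      }

    Users then run "tclsh startup.tcl" against their own Tcl installation; the one thing lost relative to a real tclsh prompt is line editing and history.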

  • How do I lookup a JNDI Datasource from outside a web container?

    - by masotime
    I have the following environment set up:

    - Java 1.5
    - Sun Application Server 8.2
    - Oracle 10 XE
    - Struts 2
    - Hibernate

    I'm interested to know how I can write code for a Java client (i.e., outside of a web application) that can reference the JNDI datasource provided by the application server. The ports for the Sun Application Server are all at their defaults. There is a JNDI datasource named jdbc/xxxx in the server configuration, but I noticed that the Hibernate configuration for the web application uses the name java:comp/env/jdbc/xxxx instead. Most of the examples I've seen so far involve code like:

      Context ctx = new InitialContext();
      ctx.lookup("jdbc/xxxx");

    But it seems I'm either using the wrong JNDI name, or I need to configure a jndi.properties or other configuration file to correctly point to a listener. I have appserv-rt.jar from the Sun Application Server, which has a jndi.properties inside it, but it does not seem to help.

    There's a similar question here, but it doesn't give any code and refers to having iBatis obtain the JNDI datasource automatically: http://stackoverflow.com/questions/39053/accessing-datasource-from-outside-a-web-container-through-jndi
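
    For what it's worth, the java:comp/env prefix is a per-component namespace that only exists inside the container, so a standalone client looks up the global name (jdbc/xxxx) instead. A sketch of a standalone lookup, assuming Sun AS 8.x's client context factory from appserv-rt.jar and the default naming/ORB port 3700 (the exact factory class and required jars vary by release):

      import java.util.Properties;
      import javax.naming.Context;
      import javax.naming.InitialContext;
      import javax.sql.DataSource;

      public class DataSourceClient {
          public static void main(String[] args) throws Exception {
              Properties props = new Properties();
              props.put(Context.INITIAL_CONTEXT_FACTORY,
                        "com.sun.appserv.naming.S1ASCtxFactory");
              props.put("org.omg.CORBA.ORBInitialHost", "localhost");
              props.put("org.omg.CORBA.ORBInitialPort", "3700");

              Context ctx = new InitialContext(props);
              // Outside the container, use the global JNDI name,
              // not the java:comp/env/... alias.
              DataSource ds = (DataSource) ctx.lookup("jdbc/xxxx");
              System.out.println(ds.getConnection());
          }
      }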

  • Why should I use Entity Framework over Linq2SQL ...

    - by Refracted Paladin
    To be clear, I am not asking for a side-by-side comparison, which has already been asked ad nauseam here on SO. I am also not asking if Linq2Sql is dead, as I don't care. What I am asking is this: I am building internal apps only, for a non-profit organization. I am the only developer on staff. We ALWAYS use SQL Server as our database backend, and I design and build the databases as well. I have already used L2S successfully a couple of times. Taking all this into consideration, can someone offer me a compelling reason to use EF instead of L2S?

    I was at Code Camp this weekend, and after an hour-long demonstration on EF, all of which I could have done in L2S, I asked this same question. The speaker's answer was, "L2S is dead..." Very well then! NOT! (see here)

    I understand EF is what MS WANTS us to use in the future (see here) and that it offers many more customization options. What I can't figure out is whether any of that should, or does, matter for me in this environment. One particular issue we have here is that I inherited the core app, which was built on 4 different SQL databases. L2S has great difficulty with this, but when I asked the aforementioned speaker if EF would help me in this regard, he said "No!"
