Search Results

Search found 16628 results on 666 pages for 'setup kit'.


  • Why won't my styles show in a Dynamics CRM 4 IFRAME?

    - by Dan Crowther
    I have created an ASP.NET web page that includes a stylesheet to mimic Dynamics CRM styles. It is used in a CRM IFRAME (within a form). The stylesheet is referenced as follows:

        <head id="Head1" runat="server">
            <link href="Styles.css" rel="stylesheet" type="text/css" />
        </head>

    When I load the page in Visual Studio, all is well. When I load it in CRM, none of the styles are shown and no images are displayed. If I browse directly to an image, I get a 404 error. The pages themselves, however, function correctly. I set read permissions for "Everyone" on the server to see if that was causing the problem, but it didn't help. I also tried putting a plain HTML page in the folder and that won't load either - again a 404. The page is installed in the ISV folder ..../isv/MyProject. Can anyone help?

    EDIT: This is on a multi-tenant system. For my test organization (testcompany), if I browse to http://crm/testcompany/isv/MyProject/MyPage.aspx, the page is loaded (without styles and images). If I browse to http://crm/testcompany/isv/MyProject/TestImage.gif, the image is not shown. If I browse to http://crm/isv/MyProject/TestImage.gif, the image is shown. Does this suggest a problem with the server setup and the way CRM messes around with virtual directories?
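
    A hedged sketch, not a confirmed fix: the question's own tests show the file is reachable at http://crm/isv/MyProject/... but not under the per-organization prefix, so one thing to try is referencing the stylesheet and images with root-relative paths that bypass the organization name CRM injects into the URL. The folder name "MyProject" comes from the question; everything else here is an assumption.

        <!-- Hypothetical: reference static files relative to the site root rather than -->
        <!-- the per-organization virtual path, which CRM does not map for ISV content. -->
        <head id="Head1" runat="server">
            <link href="/ISV/MyProject/Styles.css" rel="stylesheet" type="text/css" />
        </head>
        <img src="/ISV/MyProject/TestImage.gif" alt="test" />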

    Read the article

  • Sphinx - Python modules, classes and functions documentation

    - by user343934
    Hi everyone, I am trying to document my small project with Sphinx, which I am only just getting familiar with. I have read some tutorials and the Sphinx documentation but couldn't get it to work. Setup and configuration are fine; my problems are with using Sphinx itself. My table of contents should look like this:

        Overview
            Contents
        Configuration
            Contents
        System Requirements
            Contents
        How to use
            Contents
        Modules
            Index
            Display
        Help
            Contents

    My main focus is the Modules section, which should come from docstrings. The modules live in c:/wamp/www/project/:

        index.py
            class HtmlTemplate:
                def header()
                def body()
                def form()
                __init_main    # inline function
        display.py
            class MainDisplay:
                def execute()
                def display()
                def tree()
                __init_main    # inline function

    My documentation directory is c:/users/abc/Desktop/Documentation/doc/ and contains _build, _static, _templates, conf.py and index.rst. I have added the modules directory to the system environment and edited index.rst with the following, just to test the table of contents, but I couldn't extract the docstrings directly:

        T-Alignment Documentation
        The documentation covers a general overview of the application, covering functionality and requirements in detail. To know how to use the application it is better to go through the documentation.

        .. _overview:

        Overview

        .. _System Requirement:

        System Requirement

        Seq-alignment tools can be used on varied systems, depending on whether all intermediary applications are available, such as Windows, Mac, Linux and UNIX. It has been tested on Windows under a beta version.

        System
        Applications
        Server

        .. _Configuration::

        Configuration

        Basic configuration falls into the following categories:

        Environment variables
        Apache settings

        .. _Modules::

        Modules

    How can I continue from here? I am just a beginner with Sphinx, and I need your suggestions on how to bring my modules' docstrings into my documentation page. Thanks.
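
    A hedged sketch of the usual autodoc route (none of this comes from the question itself): make the project importable from conf.py and pull the docstrings in with automodule directives. The module names index and display and the path c:/wamp/www/project are taken from the question; sphinx.ext.autodoc is a standard Sphinx extension.

        # conf.py -- make the project importable and enable autodoc
        import os
        import sys
        sys.path.insert(0, os.path.abspath('c:/wamp/www/project'))
        extensions = ['sphinx.ext.autodoc']   # or append to the existing extensions list

    and in index.rst, under the Modules heading:

        Modules
        =======

        .. automodule:: index
           :members:

        .. automodule:: display
           :members: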

    Read the article

  • svnsync loses revision properties although hook installed

    - by roesslerj
    Hello all! I have a pretty weird problem. We have set up an SVN mirror via a cron job (it needs to go from inside to outside of a firewall, so a post-commit hook is not possible) and svnsync. We installed a pre-revprop-change hook just as told. Everything seems to work fine, except that it doesn't, e.g. when executing the script manually:

        # svnsync --non-interactive sync file://<path-to-mirror> --source-username <usr> --source-password <pwd>
        Committed revision 19817.
        Copied properties for revision 19817.

    No error, no complaints. But checking the revision properties says:

        # svnlook info <path-to-mirror>
        0
        # svn info -r HEAD file://<path-to-mirror> 2>&1
        Path: <root-of-mirror>
        URL: file://<path-to-mirror>
        Repository Root: file://<path-to-mirror>
        Repository UUID: <uid>
        Revision: 19817
        Node Kind: directory
        Last Changed Rev: 19817

    So somehow the author and timestamp information gets lost, but we need that information for our internal processes. Since no error or warning is produced, I have absolutely no idea where to even start looking. Everything is local (except for the remote master), so there are no server logs to look at. I also tried to recopy manually via svnsync copy-revprops (http://chestofbooks.com/computers/revision-control/subversion-svn/svnsync-Copy-revprops-Ref-svnsync-C-Copy-revprops.html). It says:

        Copied properties for revision 19885.

    But when I query the properties, the result is just the same. Any ideas how I could approach this problem, or even better, how to solve it? Any ideas appreciated.
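
    One hedged place to look (an assumption, not something the question establishes): inspect the revision properties on the mirror directly, and confirm the mirror's pre-revprop-change hook really exits 0, since svnsync can only copy svn:author and svn:date if that hook allows the change.

        # Inspect the revision properties svnsync should have copied
        svn proplist --revprop -r 19817 -v file://<path-to-mirror>

        #!/bin/sh
        # hooks/pre-revprop-change on the mirror -- a minimal permissive version
        exit 0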

    Read the article

  • InstallExecuteSequence cache interferes with custom action operation

    - by Dima G
    I need to upgrade a product that could be installed in per-user context to a new version that is always in per-machine context. The requirements are:

    - Whether the old version was installed in a per-user (no matter who) or per-machine context should be completely seamless to the administrator performing the upgrade.
    - The MSI upgrade should succeed without needing the password of the user who originally installed the previous version in a per-user context.
    - The installation should be performed from a single .msi file (no setup.exe is allowed).
    - The installation should be able to run in silent (non-UI) mode.
    - No reboots are allowed during installation.

    My strategy is to find out at the beginning of the installation whether the product is already installed in per-user context, and if so, to transform the registry keys manually to per-machine context (I checked: no additional changes, such as file system changes, are needed apart from this transform). I figured out how to move all the appropriate keys in the registry from the user settings to the machine settings (pre-loading the appropriate user hive in case it didn't appear in HKEY_USERS) and created a custom action that does it - and it does work when I run it as a stand-alone executable before running the MSI.

    The problem, however, is that when Windows Installer runs the InstallExecuteSequence it first creates a "cached product context" for all products. So when my custom action runs in the course of the InstallExecuteSequence, this cache has already been created. Thus the FindRelatedProducts action, which determines whether an older product with the same upgrade code exists, looks at that cache and ignores the changes that my custom action applied. If the previous product was in per-user context before the MSI ran, FindRelatedProducts will look at the cache and not apply the upgrade or remove the previous version, because the new product is in per-machine context - even though by that time the previous product version has already been switched to per-machine context in the registry by my custom action.

    What can be done to solve this problem without violating the requirements stated above?

    Read the article

  • Cucumber-rails on jruby installs gem into my apps root directory with bundler

    - by brad
    Just installed cucumber 0.7.2 and cucumber-rails 0.3.1 with jruby-1.4.0 on OS X. When I run bundle install, it places a cucumber-rails directory in my main app with all of the gem code and dependencies. First off, this is definitely not what I want, and I'm not sure why this happens for cucumber-rails only. Second, if I delete this folder and just install cucumber-rails manually, then when I run script/generate feature blah I get:

        /Users/bradrobertson/.rvm/rubies/jruby-1.4.0/lib/ruby/site_ruby/1.8/rubygems/source_index.rb:344:in `refresh!': source index not created from disk (RuntimeError)
            from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/rails/vendor_gem_source_index.rb:34:in `refresh!'
            from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/rails/vendor_gem_source_index.rb:29:in `initialize'
            from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/rails/gem_dependency.rb:21:in `new'
            from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/rails/gem_dependency.rb:21:in `add_frozen_gem_path'
            from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/initializer.rb:298:in `add_gem_load_paths'
            from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/initializer.rb:132:in `process'
            from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/initializer.rb:113:in `run'
            from /Users/bradrobertson/Repos/app/source/trunk/config/environment.rb:13
            from /Users/bradrobertson/Repos/app/source/trunk/config/environment.rb:1:in `require'
            from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/commands/generate.rb:1
            from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/commands/generate.rb:3:in `require'
            from script/generate:3

    Similarly, running rake cucumber I get:

        rake aborted!
        source index not created from disk

    So something obviously doesn't work. If I add that cucumber-rails directory back in, then rake cucumber actually runs. Can someone tell me why it would need to install the gem right in my Rails app? I've never seen this before.

    Setup: jruby-1.4.0, cucumber-0.7.2, cucumber-rails 0.3.1, bundler 0.9.23, webrat 0.7.1

    Read the article

  • wscript.shell running file with space in path with PHP

    - by ermac2014
    I was trying to use WScript.Shell through COM objects in PHP to pass some cmd commands to the cURL binary (the DOS version). Here is what I use to perform this task:

        function windExec($cmd, $mode='') {
            // Setup the command to run from "run"
            $cmdline = "cmd /C $cmd";
            // Set up the output and mode
            if ($mode == 'FG') {
                $outputfile = uniqid(time()) . ".txt";
                $cmdline .= " > $outputfile";
                $m = true;
            } else {
                $m = false;
            }
            // Make a new instance of the COM object
            $WshShell = new COM("WScript.Shell");
            // Make the command window but don't show it.
            $oExec = $WshShell->Run($cmdline, 0, $m);
            if ($outputfile) {
                // Read the tmp file.
                $retStr = file_get_contents($outputfile);
                // Delete the temp file.
                unlink($outputfile);
            } else {
                $retStr = "";
            }
            return $retStr;
        }

    Now when I run this function like:

        windExec("\"C:/Documents and Settings/ermac/Desktop/my project/curl\" http://www.google.com/", 'FG');

    cURL doesn't run, because there is a problem with the path. But when I remove the spaces from the path it works great:

        windExec("\"C:/curl\" http://www.google.com/", 'FG');

    So my question is: how can I escape these spaces in WScript.Shell commands? Is there any way I can fix this? Thanks in advance :)
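
    A hedged guess at a workaround (the cmd.exe quoting behaviour is the assumption here, not anything stated in the question): when the string handed to cmd /C already begins with a quoted path, cmd.exe tends to strip the outer quotes, so wrapping the whole command in one extra pair of quotes before prepending cmd /C sometimes helps, as can letting PHP build the quoting. A sketch only:

        // Hypothetical variant of the first line inside windExec(); everything else unchanged.
        // The surrounding quotes are for cmd.exe, which re-parses commands that start with a quote.
        $cmdline = 'cmd /C "' . $cmd . '"';

        // Alternatively, let escapeshellarg() quote the executable path for you:
        $exe = 'C:/Documents and Settings/ermac/Desktop/my project/curl';
        $out = windExec(escapeshellarg($exe) . ' http://www.google.com/', 'FG');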

    Read the article

  • Can I use pdb files to step through a 3rd party assembly?

    - by Pure.Krome
    Hi folks, my friend has made a really helpful class library which I use all the time. I usually use Reflector to see what his code does. What I really want is to step through his code while I'm debugging, so he gave me his .pdb file:

        Foo.dll (release configuration, compiled)
        Foo.pdb

    Now, I'm not sure how I can get it to automatically break into his code when it throws an exception (his code, at various points, throws exceptions, like "A first chance exception of type 'System.Web.HttpException' occurred in Foo.dll"). Can I do this? Do I need to set something up in the Symbol Server settings in Visual Studio? Do I need the DLL compiled in the Debug configuration and to be given both the .dll and .pdb files? Or (and I'm really afraid of this one) do I need the .dll, the .pdb AND his source code?

    I also had a look at this previous SO question, but it didn't really help (but it is proof that I tried to search before asking a question). Can someone help me please?

    Read the article

  • Loader.php trying to load Doctrine classes, but we use Propel!

    - by kewpiedoll99
    We are finding cases where we get the following 500 error:

        File xyz.php does not exist or class "xyz" was not found in the file at () in SF_ROOT_DIR/lib/vendor/Zend/Loader.php line 107

    ... where xyz is Memcache (when running symfony cc on the command line) or sfDoctrineAdminGenerator (when using an old-ish admin-generator-generated CMS page). We use Propel, but Loader.php is trying to load classes used only for Doctrine. Currently I am using a filthy hack where I make Loader.php check whether the file is either of these two cases and, if so, simply return rather than trying to load it. Obviously this is unacceptable longer term. Has anybody encountered this, and how did you solve it?

    Edited to add: we have:

        class ProjectConfiguration extends sfProjectConfiguration
        {
          public function setup()
          {
            // for compatibility / remove and enable only the plugins you want
            $this->enableAllPluginsExcept(array('sfDoctrinePlugin'));
          }
        }

    And we have a propel.ini file in our top-level config directory. This has only started in the past four weeks or so, and we've had a stable build for over a year now. I'm pretty sure Doctrine is totally disabled.

    Read the article

  • Stale connection with Pheanstalk

    - by token47
    I'm using beanstalkd to offload some work to other machines. The setup is a bit unusual: the server is on the internet (public IP) but the consumers are behind ADSL lines in people's homes. So there is a Linux client going out through a dynamic IP and connecting to the server to get a job. It's all PHP and I'm using the Pheanstalk library.

    Everything runs smoothly for some time, but then the ADSL line changes its IP (every 24 hours the provider forces a disconnect and reconnect) and the client just hangs, never to come out of "reserve". I thought that putting a timeout on the reserve would help, but it didn't. As it turns out, the client issues a command and blocks; it never checks the timeout itself. It just issues a reserve-with-timeout (instead of a simple reserve) and it is the server's responsibility to return a TIMED_OUT when the timeout occurs. The problem is, the connection is broken (but TCP/IP doesn't know that yet, until either side tries to talk to the other) and if the client blocked on a read, it will never return.

    The library seems to have support for some kinds of local timeout (for example when trying to connect to the server), but it does not seem to contemplate this scenario. How can I detect the stale connection and force a reconnect? Is there some kind of keepalive in the protocol (and in Pheanstalk itself)? Thanks!
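
    A hedged sketch of one client-side workaround (the Pheanstalk constructor, watch() and reserve() calls reflect the library as I understand it; whether reserve() returns false or throws on a broken socket depends on the version, so both are handled): keep the reserve timeout short and throw the connection object away whenever anything goes wrong, so a dead ADSL link is noticed within one cycle rather than never.

        <?php
        // Hypothetical consumer loop; the host IP and the 'jobs' tube name are placeholders.
        function connectBeanstalk() {
            return new Pheanstalk('203.0.113.10');
        }

        $client = connectBeanstalk();
        while (true) {
            try {
                $job = $client->watch('jobs')->reserve(30);   // short timeout so a stale link surfaces quickly
                if (!$job) {
                    continue;                                 // timed out with nothing to do: loop and re-reserve
                }
                // ... process $job ...
                $client->delete($job);
            } catch (Exception $e) {
                // Socket died (e.g. the IP changed): rebuild the connection and back off briefly.
                $client = connectBeanstalk();
                sleep(5);
            }
        }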

    Read the article

  • Adding multiple rss feeds to a script in SCALA InfoChannel Designer 5

    - by godleuf
    Okay, since it is impossible to talk to anyone on the phone or get support through Scala's forum, I am going to take a shot and see if anyone out there is feeling my pain. I have a client that uses Scala's InfoChannel Designer and Content Manager. I have had to learn this software from scratch and I have to say it hasn't been easy. I think I am at a point where the overall design is set, but I need to implement a couple of things before I can make this happen.

    RSS feeds are my issue at this point - multiple RSS feeds, to be specific. I need a feed coming in for three areas of content: wiki news (or equivalent), local weather and a stock ticker. I have learned how to set up a crawl using a script example available from Scala's file center, copying and pasting it into my design. But from what I have learned first-hand and by reading through other forums, you cannot have feeds from three different sources or URLs running simultaneously. It doesn't seem like it should be an issue, but apparently it is. This small step has held up this project for far too long and I need to get it figured out.

    This doesn't even touch on my issue of feeding in streaming video as a background, but I have gone over that in another question with no luck so far. If there is ANYONE out there who has done anything similar using this software, your feedback and/or suggestions would be greatly appreciated. Thank you for allowing me to vent!

    Read the article

  • How does jquery display an image received from an ajax request?

    - by Gnee
    I have this working great, but I'd like a deeper understanding of what is actually going on behind the scenes. I am using jQuery's ajax method to pull five blog posts (returning only the title and first photo). A PHP script grabs each post's title and first photo, sticks them in an array and sends it back to my browser as JSON. Upon receiving the JSON object, jQuery grabs its first member and displays that title and photo. In a gallery I made, the user can iterate through the 1-5 posts using buttons. So the actual AJAX call happens right away, and only once. I am basically using this kind of setup: $('my_div').html(json_obj[i]), and each click does an i++.

    So is jQuery plucking these blog posts from my computer's memory, my web browser's cache, or some kind of cache in the JavaScript engine? One of the things it returns is a pretty gnarly animated GIF. I wonder if it is constantly running in the background (but not visible), stealing processing cycles, etc. Or is JavaScript just inserting it (say, a Flash movie) into the DOM, and before that it does nothing but take up a little memory (no processing)?

    Anyway, I'm just curious. If someone is a guru on this, I'd love to hear your take. Thanks!!

    Read the article

  • What is the best way to use Guice and JMock together?

    - by Yishai
    I have started using Guice to do some dependency injection on a project, primarily because I need to inject mocks (using JMock currently) a layer away from the unit test, which makes manual injection very awkward. My question is: what is the best approach for introducing a mock?

    What I currently have is a new module in the unit test that satisfies the dependencies, binding them with a provider that looks like this:

        public class JMockProvider<T> implements Provider<T> {
            private T mock;

            public JMockProvider(T mock) {
                this.mock = mock;
            }

            public T get() {
                return mock;
            }
        }

    The mock is passed in the constructor, so a JMock setup might look like this:

        final CommunicationQueue queue = context.mock(CommunicationQueue.class);
        final TransactionRollBack trans = context.mock(TransactionRollBack.class);
        Injector injector = Guice.createInjector(new AbstractModule() {
            @Override
            protected void configure() {
                bind(CommunicationQueue.class).toProvider(new JMockProvider<QuickBooksCommunicationQueue>(queue));
                bind(TransactionRollBack.class).toProvider(new JMockProvider<TransactionRollBack>(trans));
            }
        });
        context.checking(new Expectations() {{
            oneOf(queue).retrieve(with(any(int.class)));
            will(returnValue(null));
            never(trans);
        }});
        injector.getInstance(RunResponse.class).processResponseImpl(-1);

    Is there a better way? I know that AtUnit attempts to address this problem, although I'm missing how it auto-magically injects a mock that was created locally like the one above. I'm looking for either a compelling reason why AtUnit is the right answer here (other than its ability to swap DI and mocking frameworks without changing tests), or a better solution for doing it by hand.
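
    One hedged simplification (an observation about the Guice API, not an endorsement of AtUnit or otherwise): a mock that already exists as a local object can be bound directly with toInstance(), which removes the need for the JMockProvider wrapper entirely. A sketch reusing the names from the question:

        // Bind the locally created mocks as instances; Guice will hand back these exact objects.
        Injector injector = Guice.createInjector(new AbstractModule() {
            @Override
            protected void configure() {
                bind(CommunicationQueue.class).toInstance(queue);
                bind(TransactionRollBack.class).toInstance(trans);
            }
        });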

    Read the article

  • Problems compiling libjingle/gtk+-2.0 for Mac OS X

    - by mindthief
    Hi All, I'm trying to compile libjingle on Mac OS X Snow Leopard. The INSTALL file says to './configure', 'make' and 'make install', as usual, but make fails for me. Initially it gave some messages indicating that I didn't have pkg-config installed (I guess OS X doesn't come with it?), so I downloaded pkg-config from http://pkgconfig.freedesktop.org/releases/. Now I get this message:

        Package gtk+-2.0 was not found in the pkg-config search path.
        Perhaps you should add the directory containing `gtk+-2.0.pc'
        to the PKG_CONFIG_PATH environment variable
        No package 'gtk+-2.0' found

    I tried to install GTK using the script at SourceForge: http://sourceforge.net/projects/gtk-osx/ (this is the script the GTK website points to). Running the script didn't really seem to do anything; here is the output:

        $ ./gtk-osx-build-setup.sh
        Checking out jhbuild (2.27.3) from git...
        From git://git.gnome.org/jhbuild
         * tag               2.27.3     -> FETCH_HEAD
        Installing jhbuild...
        Installing jhbuild configuration...
        Installing gtk-osx moduleset files...
        Done.
        $

    And I still get that error message about "Package gtk+-2.0 not found" while make-ing libjingle. Help will be appreciated, thanks!
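
    A hedged reading of that output (an assumption about the gtk-osx tooling, not something the question confirms): gtk-osx-build-setup.sh only installs jhbuild; GTK itself still has to be built with it, and pkg-config then has to be told where the resulting gtk+-2.0.pc landed. A sketch, with paths that may differ on your machine:

        # Build GTK via jhbuild (module names as used by the gtk-osx project)
        ~/.local/bin/jhbuild bootstrap
        ~/.local/bin/jhbuild build meta-gtk-osx-core

        # Point pkg-config at wherever gtk+-2.0.pc ended up; ~/gtk/inst is the
        # prefix gtk-osx used by default at the time, adjust if yours differs.
        export PKG_CONFIG_PATH="$HOME/gtk/inst/lib/pkgconfig:$PKG_CONFIG_PATH"
        pkg-config --modversion gtk+-2.0   # should now print a 2.x version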

    Read the article

  • Java appending XML data

    - by Travis
    I've already read through a few of the answers on this site but none of them worked for me. I have an XML file like this:

        <root>
          <character>
            <name>Volstvok</name>
            <charID>(omitted)</charID>
            <userID>(omitted)</userID>
            <apiKey>(omitted)</apiKey>
          </character>
        </root>

    I need to add another <character> somehow. I'm trying this, but it does not work:

        public void addCharacter(String name, int id, int userID, String apiKey) {
            Element newCharacter = doc.createElement("character");

            Element newName = doc.createElement("name");
            newName.setTextContent(name);

            Element newID = doc.createElement("charID");
            newID.setTextContent(Integer.toString(id));

            Element newUserID = doc.createElement("userID");
            newUserID.setTextContent(Integer.toString(userID));

            Element newApiKey = doc.createElement("apiKey");
            newApiKey.setTextContent(apiKey);

            // Setup and write
            newCharacter.appendChild(newName);
            newCharacter.appendChild(newID);
            newCharacter.appendChild(newUserID);
            newCharacter.appendChild(newApiKey);
            doc.getDocumentElement().appendChild(newCharacter);
        }
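
    A hedged guess at the missing step (an assumption, since the question does not show how the file is written): appending to the in-memory Document never touches the file on disk, so the DOM has to be serialized back out after addCharacter() runs. A minimal sketch using the standard javax.xml.transform classes; the file name is a placeholder.

        import java.io.File;
        import javax.xml.transform.OutputKeys;
        import javax.xml.transform.Transformer;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.dom.DOMSource;
        import javax.xml.transform.stream.StreamResult;
        import org.w3c.dom.Document;

        // Write the modified DOM back to disk; call this after addCharacter(...).
        void saveDocument(Document doc, String path) throws Exception {
            Transformer t = TransformerFactory.newInstance().newTransformer();
            t.setOutputProperty(OutputKeys.INDENT, "yes");
            t.transform(new DOMSource(doc), new StreamResult(new File(path)));
        }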

    Read the article

  • Redirect parent of a page from a cfwindow

    - by Ryan French
    Hi All, I have a page with a cfwindow that requires the user to be logged in to view its content. The problem at the moment is that if the user logs into the site, then does nothing and the session times out, I have no way (that I can think of) to redirect the parent of the window to the login screen when the user opens it.

    So far I have tried using cflocation, but that has no way of specifying the container that should be redirected (i.e. the page in the window gets redirected, but not the window's parent). I have also thought about using a hidden input with a value based on the session, checked in the body's onLoad event, but currently that doesn't work with how the pages have been set up. The last option I have is to check the session variable when loading the window and then close the window if the user is not logged in, which would cause the parent to refresh and redirect to login anyway. However, I can't find a way to close a cfwindow without using JavaScript.

    Thanks for any help you can give.
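
    A hedged sketch for the page that loads inside the cfwindow (the session key "loggedIn" and the path "login.cfm" are placeholders, not names from the question): if the session has expired, emit a tiny script that redirects the top-level page rather than the window's own content, then abort so none of the protected markup is sent.

        <!--- Redirect the parent (top) window when the session is gone. --->
        <cfif NOT structKeyExists(session, "loggedIn")>
            <script type="text/javascript">
                window.top.location.href = "login.cfm";
            </script>
            <cfabort>
        </cfif>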

    Read the article

  • CATransaction: Layer Changes But Does Not Animate

    - by macinjosh
    I'm trying to animate part of the UI in an iPad app when the user taps a button. I have this code in my action method. It works, in the sense that the UI changes how I expect, but it does not animate the changes - they simply happen immediately. I must be missing something:

        - (IBAction)someAction:(id)sender {
            UIViewController *aViewController = <# Get an existing UIViewController #>;
            UIView *viewToAnimate = aViewController.view;
            CALayer *layerToAnimate = viewToAnimate.layer;

            [CATransaction begin];
            [CATransaction setAnimationDuration:1.0f];

            CATransform3D rotateTransform = CATransform3DMakeRotation(0.3, 0, 0, 1);
            CATransform3D scaleTransform = CATransform3DMakeScale(0.10, 0.10, 0.10);
            CATransform3D positionTransform = CATransform3DMakeTranslation(24, 423, 0);
            CATransform3D combinedTransform = CATransform3DConcat(rotateTransform, scaleTransform);
            combinedTransform = CATransform3DConcat(combinedTransform, positionTransform);

            layerToAnimate.transform = combinedTransform;

            [CATransaction commit];
            // rest of method...
        }

    I've tried simplifying the animation to just change the opacity (for example) and it still will not animate; the opacity just changes instantly. That leads me to believe something is not set up properly. Any clues would be helpful!
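
    A hedged sketch of one likely explanation: the layer backing a UIView has its implicit actions disabled, so property changes inside a bare CATransaction do not animate; an explicit CABasicAnimation (or a UIView animation) is needed. The duration and key path below mirror the question's code; the rest is an assumption, not a confirmed fix.

        // Explicitly animate the transform change on the view-backed layer.
        CABasicAnimation *anim = [CABasicAnimation animationWithKeyPath:@"transform"];
        anim.duration = 1.0;
        anim.fromValue = [NSValue valueWithCATransform3D:layerToAnimate.transform];
        anim.toValue = [NSValue valueWithCATransform3D:combinedTransform];

        layerToAnimate.transform = combinedTransform;          // set the final (model) value
        [layerToAnimate addAnimation:anim forKey:@"transform"]; // drive the visible change over 1 second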

    Read the article

  • How do I create a dynamic data transfer object dynamically from ADO.net model

    - by Richard
    I have a pretty simple database with 5 tables, with PKs and relationships set up, etc. I also have an ASP.NET MVC3 project I'm using to create simple web services that feed JSON/XML to a mobile app via POST/GET. To access my data I'm using an ADO.NET Entity Data Model class to handle generation of the entities, etc.

    Due to issues with serialization (circular references created by the auto-generated relations from the ADO.NET entity model), I've been forced to create "data transfer objects" that strip out the relations and the data that doesn't need to be transferred.

    Question 1: is there an easier way to create DTOs using the Entity Framework itself, i.e. to specify only the entity properties I want to convert to JSON results? I don't wish to use any third-party frameworks if I can help it.

    Question 2: a side question about the Entity Framework. Say I create an ADO.NET entity model in one project within a solution. Because that model relies on the connection to the database specified in project A, can project B somehow use that model with a similar connection? Both projects are in the same solution. Thanks!
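
    A hedged sketch for question 1 (the context, entity and DTO names here are invented for illustration): projecting straight into a small DTO (or an anonymous type) in the query means the JSON serializer never sees the navigation properties, so the circular-reference problem does not arise.

        // MVC3 action returning only the fields the mobile app needs.
        public JsonResult Products()
        {
            using (var db = new MyModelEntities())   // hypothetical generated context name
            {
                var items = db.Products
                    .Select(p => new ProductDto       // hypothetical DTO with just three properties
                    {
                        Id = p.Id,
                        Name = p.Name,
                        Price = p.Price
                    })
                    .ToList();

                return Json(items, JsonRequestBehavior.AllowGet);
            }
        }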

    Read the article

  • How to include external classes in a GAE deployment?

    - by kodra
    I am using the Google Plugin for Eclipse and have the following problem. The project consists of a GWT-based GUI talking to a server running on GAE and using JPA. Additionally there is a project to migrate the legacy data to the new datastore. Since both projects use a common data model, I have extracted a set of interfaces and enums into a separate project and made the other two projects depend on it.

    The Java app project seems to work, but the GWT/GAE project only works if I manually copy the classes into the WEB-INF/classes directory - and obviously that only works when using hosted mode. Does anybody know how to configure such a multi-project setup in Eclipse?

    Also, I am not sure whether the multi-project layout is the best solution. The set of common model objects is used in all three areas:

    - the user client (the GWT project, compiling the standard client and shared folders)
    - the server side (providing services for GWT-RPC, uploading and different feeds)
    - the migration application (posting the legacy data to the upload servlet)

    What are the architectural options for keeping the amount of duplicated classes to a minimum?

    Read the article

  • Unity Framework constructor parameters in MVC

    - by ubersteve
    I have an ASP.NET MVC3 site that I want to be able to use different types of email service, depending on how busy the site is. Consider the following:

        public interface IEmailService {
            void SendEmail(MailMessage mailMessage);
        }

        public class LocalEmailService : IEmailService {
            public LocalEmailService() {
                // no setup required
            }

            public void SendEmail(MailMessage mailMessage) {
                // send email via local smtp server, write it to a text file, whatever
            }
        }

        public class BetterEmailService : IEmailService {
            public BetterEmailService(string smtpServer, string portNumber, string username, string password) {
                // initialize the object with the parameters
            }

            public void SendEmail(MailMessage mailMessage) {
                // actually send the email
            }
        }

    While the site is in development, all of my controllers will send emails via the LocalEmailService; in production, they will use the BetterEmailService. My question is twofold:

    1) How exactly do I pass the BetterEmailService constructor parameters? Is it something like this (from ~/Bootstrapper.cs)?

        private static IUnityContainer BuildUnityContainer() {
            var container = new UnityContainer();
            container.RegisterType<IEmailService, BetterEmailService>("server name", "port", "username", "password");
            return container;
        }

    2) Is there a better way of doing that - i.e. putting those keys in the web.config or another configuration file, so that the site would not need to be recompiled to switch which email service it is using? Many thanks!
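
    A hedged sketch covering both halves (InjectionConstructor is Unity's documented way of supplying constructor arguments; reading them from appSettings is just one possible arrangement, and the key names below are invented):

        private static IUnityContainer BuildUnityContainer()
        {
            var container = new UnityContainer();

            // Pull the values from web.config so switching services needs no recompile.
            var smtpServer = ConfigurationManager.AppSettings["EmailSmtpServer"];
            var port       = ConfigurationManager.AppSettings["EmailSmtpPort"];
            var username   = ConfigurationManager.AppSettings["EmailUsername"];
            var password   = ConfigurationManager.AppSettings["EmailPassword"];

            // InjectionConstructor maps these values onto BetterEmailService's constructor.
            container.RegisterType<IEmailService, BetterEmailService>(
                new InjectionConstructor(smtpServer, port, username, password));

            return container;
        }

    As far as I can tell, the overload in question 1 treats a leading string as a registration name and otherwise expects InjectionMember arguments, so plain constructor values cannot be passed there directly; the InjectionConstructor above is what carries them.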

    Read the article

  • Mysql Database Question about Large Columns

    - by murat
    Hi, I have a table that has 100,000 rows, and soon it will be doubled. The size of the database is currently 5 GB and most of that goes to one particular column, a text column holding the text of PDF files. We expect to have a 20-30 GB, maybe 50 GB, database after a couple of months, and this system will be used frequently.

    I have a couple of questions about this setup:

    1) We are using InnoDB on every table, including the users table etc. Is it better to use MyISAM on this table, where we store the text version of the PDF files (from a memory usage / performance perspective)?

    2) We use Sphinx for searching, but the data must be retrieved for highlighting. Highlighting is done via the Sphinx API, but we still need to retrieve 10 rows in order to send them to Sphinx again. These 10 rows may take up 50 MB of memory, which is quite large. So I am planning to split the PDF files into chunks of 5 pages in the database, so the 100,000 rows will become around 3-4 million rows, and a couple of months later, instead of 300,000-350,000 rows, we'll have about 10 million rows storing the text version of these PDF files. However, we will retrieve fewer pages: instead of retrieving 400 pages to send to Sphinx for highlighting, we can retrieve 5, and that has a big impact on performance. Currently, when we search a term and retrieve PDF files that have more than 100 pages, the execution time is 0.3-0.35 seconds; if we retrieve PDF files with fewer than 5 pages, the execution time drops to 0.06 seconds, and it also uses less memory.

    Do you think this is a good trade-off? We will have millions of rows instead of 100k-200k, but it will save memory and improve performance. Is this a good approach to the problem, and do you have any ideas on how to overcome it? The text version of the data is used only for indexing and highlighting, so we are very flexible. Thanks.
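
    A hedged sketch of the 5-page-chunk layout described in question 2 (table and column names are invented for illustration; the engine choice echoes question 1 rather than answering it):

        -- One row per 5-page slice of a PDF; highlighting then only ever loads a few slices.
        CREATE TABLE pdf_chunks (
            id          INT UNSIGNED      NOT NULL AUTO_INCREMENT PRIMARY KEY,
            pdf_id      INT UNSIGNED      NOT NULL,   -- FK to the existing PDF/document table
            first_page  SMALLINT UNSIGNED NOT NULL,
            last_page   SMALLINT UNSIGNED NOT NULL,
            body        MEDIUMTEXT        NOT NULL,   -- text of pages first_page..last_page
            KEY idx_pdf_page (pdf_id, first_page)
        ) ENGINE=InnoDB;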

    Read the article

  • How do I correctly wait for saveScreenshot() to finish executing?

    - by Alain
    Here is my first full working test:

        var expect = require('chai').expect;
        var assert = require('assert');
        var webdriverjs = require('webdriverjs');

        var client = {};
        var webdriverOptions = {
            desiredCapabilities: { browserName: 'phantomjs' },
            logLevel: 'verbose'
        };

        describe('Test mysite', function() {
            before(function() {
                client = webdriverjs.remote(webdriverOptions);
                client.init();
            });

            var selector = "#mybodybody";

            it('should see the correct title', function(done) {
                client.url('http://localhost/mysite/')
                    .getTitle(function(err, title) {
                        expect(err).to.be.null;
                        assert.strictEqual(title, 'My title page');
                    })
                    .waitFor(selector, 2000, function() {
                        client.saveScreenshot("./ExtractScreen.png");
                    })
                    .waitFor(selector, 7000, function() {
                    })
                    .call(done);
            });

            after(function(done) {
                client.end(done);
            });
        });

    OK, it does not do much, but after working many hours to get the environment set up correctly, it passes. Now, the only way I got it working was by playing with the waitFor() method and adjusting the delays. It works, but I still do not understand how to reliably wait for the PNG file to be saved to disk. As I start dealing with test ordering, the test script will eventually move on before the file has been safely saved. How can I improve this screen-save sequence and avoid losing my screenshot? Thanks.

    Read the article

  • How to pass dynamic parameters to .pde file

    - by Kalpana
    My class Shape contains two methods, drawCircle() and drawTriangle(), each taking a different set of arguments. At present I invoke them by calling the .pde file directly. How can I pass these arguments from an HTML file, so that I can control the arguments given to the draw functions?

    1) Example.html currently has:

        <script src="processing-1.0.0.min.js"></script>
        <canvas data-processing-sources="example.pde"></canvas>

    and Example.pde has:

        class Shape {
            void drawCircle(int x, int y, int radius) {
                ellipse(x, y, radius, radius);
            }
            void drawTriangle(int x1, int y1, int x2, int y2, int x3, int y3) {
                rect(x1, y1, x2, y2, x3, y3);
            }
        }

        Shape shape = new Shape();
        shape.drawCircle(10, 40, 70);

    I am looking to do something like this in my HTML files, so that I can move all the functions into a separate file and call them with different arguments to draw different shapes (much as you would in Java):

        // A.html
        <script>
            Shape shape = new Shape();
            shape.drawCircle(10, 10, 3);
        </script>

        // B.html
        <script>
            Shape shape = new Shape();
            shape.drawTriangle(30, 75, 58, 20, 86, 75);
        </script>

    2) I am also trying Example2.pde, which has:

        void setup() {
            size(200, 200);
            background(125);
            fill(255);
        }

        void rectangle(int x1, int y1, int x2, int y2) {
            rect(x1, y1, x2, y2);
        }

    and Example2.html has:

        var processingInstance;
        processingInstance.rectangle(30, 20, 55, 55);

    but this is not working. How can I pass these parameters dynamically from the HTML?
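
    A hedged sketch for case 2 (the canvas id "sketchCanvas" is invented; the assumption is that Processing.js exposes the sketch's top-level functions on the instance returned by Processing.getInstanceById, and that the sketch has finished loading by the time it is called):

        <script src="processing-1.0.0.min.js"></script>
        <canvas id="sketchCanvas" data-processing-sources="example2.pde"></canvas>
        <script>
            window.onload = function () {
                // Look up the running sketch and call its rectangle() with arguments chosen in HTML/JS.
                var p = Processing.getInstanceById('sketchCanvas');
                if (p) {
                    p.rectangle(30, 20, 55, 55);
                }
            };
        </script>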

    Read the article

  • Action Mailer: How do I render dynamic data in an email body that is stored in the database?

    - by Brandon Toone
    I have Action Mailer set up to render an email using the body attribute of my Email model (in the database). I want to be able to use ERB in the body, but I can't figure out how to get it to render in the sent email message. I'm able to pass the body as a string with this code:

        # models/user_mailer.rb
        def custom_email(user, email_id)
          email = Email.find(email_id)

          recipients user.email
          from       "Mail It Example <[email protected]>"
          subject    "Hello From Mail It"
          sent_on    Time.now

          # pulls the email body and passes a string to the template views/user_mailer/customer_email.text.html.erb
          body :msg => email.body
        end

    I came across this article, http://rails-nutshell.labs.oreilly.com/ch05.html, which says I can use render, but I'm only able to get render :text to work, not render :inline:

        # models/user_mailer.rb
        def custom_email(user, email_id)
          email = Email.find(email_id)

          recipients user.email
          from       "Mail It Example <[email protected]>"
          subject    "Hello From Mail It"
          sent_on    Time.now

          # body :msg => email.body
          body :msg => (render :text => "Thanks for your order")          # renders text and passes it as a variable to the template
          # body :msg => (render :inline => "We shipped <%= Time.now %>") # throws a NoMethodError
        end
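
    A hedged alternative that sidesteps render entirely (an assumption about intent, not a documented Action Mailer feature): evaluate the stored body with ERB from the Ruby standard library before handing the result to the template. Note this executes whatever ERB is stored in the database, so the body column has to be trusted.

        # models/user_mailer.rb -- sketch of the body handling only
        require 'erb'

        # `binding` exposes the mailer method's local variables (e.g. `user`) to the template.
        rendered = ERB.new(email.body).result(binding)
        body :msg => rendered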

    Read the article

  • Website running JavaScript setInterval starts to fail after ~1day

    - by Martin Clemens Bloch
    I wish I could be more specific here, but unfortunately that might be hard. I basically hope this is some well-known timeout or setup issue.

    We have a website (a JS/HTML ASP.NET project) showing an overview on a screen at a factory. The screen has no keyboard, so the page should keep refreshing itself forever - years perhaps, though one week would be acceptable. (It is used by factory workers to see incoming transports, etc.) This all works perfectly; the site continuously updates itself and gets the correct new data. Then, sometimes, in the morning this overview screen has no data and the workers have to refresh the site manually using the refresh button or F5 - which fixes everything.

    I have tried a few things to reproduce the error myself, including:

    - Cutting the internet connection, and many other ways of making it time out (breakpoints, stopping services, etc.).
    - Setting the refresh time of setInterval to 100 ms and letting the site run 3-5 minutes (the normal interval is 1 minute). setInterval SHOULD run forever, according to the searching I have done.
    - Checking that the "JavaScript frequency" has not been turned down in the power-saving settings.

    No matter what, the site resumes correct operation WITHOUT a refresh as soon as I plug the internet cable back in (or undo whatever else I did) - I cannot reproduce the error.

    The website is dependent on a backend WCF service and project integration, but since the workers fix the problem with a simple refresh, I am assuming that has not crashed.

    EDIT: The browser I tried to reproduce the error in was IE on Windows 7. I will ask about the factory tomorrow, but I am guessing IE on Windows there as well.

    Is setInterval in fact really infinite, or is there something else wrong here? All help much appreciated. 0.5 bitcoin reward for solving answer ;)
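
    A hedged sketch of a defensive refresh loop rather than a diagnosis (the endpoint '/overview/data' and the renderOverview function are placeholders, and the jQuery-style ajax options are an assumption about the page's stack): re-arm with setTimeout instead of setInterval, put a timeout on every request, and fall back to a full page reload on failure so the keyboard-less screen recovers on its own.

        function refreshOverview() {
            $.ajax({
                url: '/overview/data',        // placeholder endpoint
                timeout: 30000,               // never let a dead socket hang the loop
                success: function (data) { renderOverview(data); },    // hypothetical render helper
                error: function () { window.location.reload(true); },  // dead link: recover with a hard reload
                complete: function () { setTimeout(refreshOverview, 60000); }  // re-arm only after this cycle ends
            });
        }
        refreshOverview();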

    Read the article

  • Forms Authentication & Virtual Directory

    - by benclaytonfranklin
    Hi, we're having trouble getting Forms Authentication to work with a virtual directory in IIS. We have a main site, and then a microsite set up within a virtual directory. This microsite has its own admin system within an "Admin" folder, which is supposed to be protected by authentication, but currently the authentication never kicks in and the admin section is browsable by anyone. The web.config within the Admin folder has the following:

        <?xml version="1.0"?>
        <configuration>
          <appSettings/>
          <connectionStrings/>
          <system.web>
            <authorization>
              <deny users="?"/>
            </authorization>
            <customErrors mode="RemoteOnly" defaultRedirect="~/Admin/Error.aspx"/>
          </system.web>
        </configuration>

    Could anyone give me any clues as to why this might not be working? Cheers!
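
    A hedged sketch of one thing to check (an assumption, since the question only shows the Admin folder's config): the deny rule in /Admin can only bite if forms authentication is actually switched on for the application that owns the virtual directory, so the microsite's root web.config generally needs its own authentication section. The loginUrl below is a placeholder.

        <!-- Root web.config of the microsite (the virtual directory application), not /Admin -->
        <system.web>
            <authentication mode="Forms">
                <forms loginUrl="~/Login.aspx" timeout="30" />
            </authentication>
        </system.web>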

    Read the article
