Search Results

Search found 20484 results on 820 pages for 'small projects'.

Page 663/820

  • How can I setup ANT with Subversion and ColdFusion Builder (eclipse) to check out a local build to w

    - by Smooth Operator
    I am not sure if there's an answer for this already -- I couldn't find one for this (hopefully common) setup: I recently converted one of my ColdFusion projects to deploy via ANT. I have a local ANT script that instructs a remote server to check out the code and run the application's specific build file remotely on that server. I have a few endpoints: Live - production (on the production server); Staging - on the production server, with a different datasource, etc.; Dev - on the local box. What I have run into, it seems, is a simple and common problem: I now need ANT to create any build, even locally. Fine -- I created a local endpoint and it configures for my box. The issue: how do I get it to show up as a project (automatically, if possible) in Eclipse/ColdFusion Builder? What I envision is that instead of checking out a branch via the Subversion plugin in CFBuilder/Eclipse, I now use ANT to do that for me. Since I use ColdFusion Builder (Eclipse + Adobe's plugin), I have all of Eclipse's tools and plugins available to solve the problem of: how can I best call ANT from within Eclipse/ColdFusion Builder to set up the local build as a project that I can develop and work on? I think when I check the code back in from the local box, I'd have to be sure not to check in any files with local config paths, etc. I hope this is a detailed and clear enough explanation; if not, please ask. Thanks in advance!

    Read the article

  • How to minimize total cost of shortest path tree

    - by Michael
    I have a directed acyclic graph with positive edge-weights. It has a single source and a set of targets (vertices furthest from the source). I find the shortest paths from the source to each target. Some of these paths overlap. What I want is a shortest path tree which minimizes the total sum of weights over all edges. For example, consider two of the targets. Given all edge weights equal, if they share a single shortest path for most of their length, then that is preferable to two mostly non-overlapping shortest paths (fewer edges in the tree equals lower overall cost). Another example: two paths are non-overlapping for a small part of their length, with high cost for the non-overlapping paths, but low cost for the long shared path (low combined cost). On the other hand, two paths are non-overlapping for most of their length, with low costs for the non-overlapping paths, but high cost for the short shared path (also, low combined cost). There are many combinations. I want to find solutions with the lowest overall cost, given all the shortest paths from source to target. Does this ring any bells with anyone? Can anyone point me to relevant algorithms or analogous applications? Cheers!

    Read the article

  • How can I reuse .NET resources in multiple executables?

    - by Brandon
    I have an existing application that I'm supposed to take and create a "mini" version of. Both are localized apps and we would like to reuse the resources in the main application as is. So, here's the basic structure of my apps: MainApplication.csproj /Properties/Resources.resx /MainUserControl.xaml (uses strings in Properties/Resources.resx) /MainUserControl.xaml.cs MiniApplication.csproj link to MainApplication/Properties/Resources.resx link to MainApplication/MainUserControl.xaml link to MainApplication/MainUserControl.xaml.cs MiniApplication.xaml (tries to use MainUserControl from MainApplication link) So, in summary, I've added the needed resources and user control from the main application as links in the mini application. However, when I run the mini application, I get the following exception. I'm guessing it's having trouble with the different namespaces, but how do I fix it? Could not find any resources appropriate for the specified culture or the neutral culture. Make sure "MainApplication.Properties.Resources.resources" was correctly embedded or linked into assembly "MiniApplication" at compile time, or that all the satellite assemblies required are loadable and fully signed. FYI, I know I could put the user control in a user control library, but the problem is that the mini application needs to have a small footprint, so we only want to include what we need.
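
    A minimal sketch, assuming the linked .resx ends up embedded under the mini application's own root namespace rather than MainApplication's: list what actually got embedded, then point a ResourceManager at that name explicitly. The base name below is an assumption to verify against the output of GetManifestResourceNames().

      using System.Reflection;
      using System.Resources;

      static class LinkedResourceProbe
      {
          public static string Lookup(string key)
          {
              Assembly app = Assembly.GetExecutingAssembly();

              // Dump the names the compiler actually used, e.g.
              // "MiniApplication.Properties.Resources.resources".
              foreach (string name in app.GetManifestResourceNames())
                  System.Diagnostics.Debug.WriteLine(name);

              // Hypothetical base name -- substitute whatever was printed above.
              var manager = new ResourceManager("MiniApplication.Properties.Resources", app);
              return manager.GetString(key);
          }
      }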

    Read the article

  • TimeoutException when WCF Host and Client are in the same process

    - by Pharao2k
    I've ran into a really weird problem. I am building a heavily distributed application where each app instance can either be a Host and/or Client to a WCF-Service (very p2p-like). Everything works fine, as long as the Client and the targeted Host (By which I mean the app, not the Host, since currently everything runs on a single computer (so no Firewall problems etc.)) are NOT the same. IF they are the same, then the app hangs for exactly 1 Minute and then throws a TimeoutException. WCF-Logging did not produce anything helpful. Here is a small app which demonstrates the Problem: public partial class MainWindow : Window { public MainWindow() { InitializeComponent(); } private void button1_Click(object sender, RoutedEventArgs e) { var binding = new NetTcpBinding(); var baseAddress = new Uri(@"net.tcp://localhost:4000/Test"); ServiceHost host = new ServiceHost(typeof(TestService), baseAddress); host.AddServiceEndpoint(typeof(ITestService), binding, baseAddress); var debug = host.Description.Behaviors.Find<ServiceDebugBehavior>(); if (debug == null) host.Description.Behaviors.Add(new ServiceDebugBehavior { IncludeExceptionDetailInFaults = true }); else debug.IncludeExceptionDetailInFaults = true; host.Open(); var clientBinding = new NetTcpBinding(); var testProxy = new TestProxy(clientBinding, new EndpointAddress(baseAddress)); testProxy.Test(); } } [ServiceContract] public interface ITestService { [OperationContract] void Test(); } public class TestService : ITestService { public void Test() { MessageBox.Show("foo"); } } public class TestProxy : ClientBase<ITestService>, ITestService { public TestProxy(NetTcpBinding binding, EndpointAddress remoteAddress) : base(binding, remoteAddress) { } public void Test() { Channel.Test(); } } What am I doing wrong? Regards, Pharao2k
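
    One direction worth checking, sketched under the assumption that the hang comes from WCF capturing the WPF UI thread's SynchronizationContext when the host is opened inside button1_Click: the synchronous client call then blocks the very thread the service dispatch wants to run on, and the call times out after the default one minute. This is a hedged sketch, not a confirmed diagnosis.

      using System.ServiceModel;

      // Opt the service out of the captured UI context so Test() does not have
      // to execute on the thread that is blocked inside testProxy.Test().
      [ServiceBehavior(UseSynchronizationContext = false)]
      public class TestService : ITestService
      {
          public void Test()
          {
              // MessageBox.Show is safe from a worker thread; other UI work
              // would need Dispatcher.Invoke.
              System.Windows.MessageBox.Show("foo");
          }
      }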

    Read the article

  • Dependency Injection Question - ASP.NET

    - by Paul
    I'm starting a web application that contains the following projects: Booking.Web Booking.Services Booking.DataObjects Booking.Data I'm using the repository pattern in my data project only. All services will be the same, no matter what happens. However, if a customer wants to use Access, it will use a different data repository than if the customer wants to use SQL Server. I have StructureMap, and want to be able to do the following: Web project is unaffected. It's a web forms application that will only know about the services project and the dataobjects project. When a service is called, it will use StructureMap (by looking up the bootstrapper.cs file) to see which data repository to use. An example of a services class is the error logging class: public class ErrorLog : IErrorLog { ILogging logger; public ErrorLog() { } public ErrorLog(ILogging logger) { this.logger = logger; } public void AddToLog(string errorMessage) { try { AddToDatabaseLog(errorMessage); } catch (Exception ex) { AddToFileLog(ex.Message); } finally { AddToFileLog(errorMessage); } } private void AddToDatabaseLog(string errorMessage) { ErrorObject error = new ErrorObject { ErrorDateTime = DateTime.Now, ErrorMessage = errorMessage }; logger.Insert(error); } private void AddToFileLog(string errorMessage) { // TODO: Take this value from the web.config instead of hard coding it TextWriter writer = new StreamWriter(@"E:\Work\Booking\Booking\Booking.Web\Logs\ErrorLog.txt", true); writer.WriteLine(DateTime.Now.ToString() + " ---------- " + errorMessage); writer.Close(); } } I want to be able to call this service from my web project, without defining which repository to use for the data access. My boostrapper.cs file in the services project is defined as: public class Bootstrapper { public static void ConfigureStructureMap() { ObjectFactory.Initialize(x => { x.AddRegistry(new ServiceRegistry()); } ); } public class ServiceRegistry : Registry { protected override void configure() { ForRequestedType<IErrorLog>().TheDefaultIsConcreteType<Booking.Services.Logging.ErrorLog>(); ForRequestedType<ILogging>().TheDefaultIsConcreteType<SqlServerLoggingProvider>(); } } } What else do I need to get this to work? When I defined a test, the ILogger object was null. Thanks,
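
    A hedged sketch of why the logger can come out null in a test, assuming StructureMap 2.x as configured above: the ILogging dependency is only supplied when ErrorLog is resolved through the container (or handed in by the test), while new ErrorLog() uses the parameterless constructor and leaves the field null. FakeLogger below is a hypothetical stub, and ILogging is assumed to expose only Insert(ErrorObject).

      using StructureMap;

      public class ErrorLogSmokeTest
      {
          public void AddToLog_via_container()
          {
              Bootstrapper.ConfigureStructureMap();

              // Resolving through StructureMap injects SqlServerLoggingProvider
              // into the greediest ErrorLog constructor.
              IErrorLog log = ObjectFactory.GetInstance<IErrorLog>();
              log.AddToLog("smoke test entry");
          }

          public void AddToLog_with_a_fake()
          {
              // No container needed in a unit test: pass the dependency in directly.
              IErrorLog log = new ErrorLog(new FakeLogger());
              log.AddToLog("smoke test entry");
          }
      }

      // Hypothetical stub for the test above.
      class FakeLogger : ILogging
      {
          public void Insert(ErrorObject error) { /* record for assertions */ }
      }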

    Read the article

  • Facebook app getting "Warning: fopen......." - Please help!

    - by Poni
    Hello! I'm trying to establish a Facebook application. I'm getting the following PHP notice randomly: Warning: fopen(http://api.facebook.com/restserver.php? … &v=1.0) [function.fopen]: failed to open stream: HTTP request failed! HTTP/1.1 400 Bad Request in D:\projects........\facebookapi_php5_restlib.php on line 3391 As said, it happens randomly. My app configuration: IFrame Page host configuration: Windows Vista, Apache 2.2.11, PHP 5.1 No support for cURL is possible at the moment. My aim: Get the user's first name & pic_square along with two other preferences. If these two preferences are not set then I'll set them, after doing some internal job. I understand that the REST API is old. Which interface should I use then? Saw FQL, which seem to be fine, but I see no corresponding getUserPreference/setUserPreference interface. Where is it? How do I combine it with the "first name & pic_square" info that I need in one query? How do I read the results of multi-query query? Also, if I understand correctly, the FQL is also executed with the fopen() function, and as we see here it might fail. That means something went wrong, and then I want to handle it. Any suggestions how to do so efficiently? Less important than the above question though. Thank you all.

    Read the article

  • Need some advice on Core Data modeling strategy

    - by Andy
    I'm working on an iPhone app and need a little advice on modeling the Core Data schema. My idea is a utility that allows the user to speed-dial their contacts using user-created rules based on the time of day. In other words, I would tell the app that my wife is commuting from 6am to 7am, at work from 7am to 4pm, commuting from 4pm to 5pm, and home from 5pm to 6am, Monday through Friday. Then, when I tap her name in my app, it would select the number to dial based on the current day and time. I have the user interface nearly complete (thanks in no small part to help I've received here), but now I've got some questions regarding the persistent store. The user can select start- and stop-times in 5-minute increments. This means there are 2,016 possible "time slots" in week (7 days * 24 hours * 12 5-minute intervals per hour). I see a few options for setting this up. Option #1: One array of time slots, with 2,016 entries. Each entry would be a dictionary containing a contact identifier and an associated phone number to dial. I think this means I'd need a "Contact" entity to store the contact information, and a "TimeSlot" entity for each of the 2,016 possible time slots. Option #2: Each Contact has its own array of time slots, each with 2,016 entries. Each array entry would simply be a string indicating which phone number to dial. Option #3: Each Contact has a dictionary of time slots. An entry would only be added to the dictionary for time slots with an active rule. If a search for, say, time slot 1,299 (Friday 12:15pm) didn't find a key @"1299" in the dictionary, then a default number would be dialed instead. I'm not sure any of these is the "right" way or the "best" way. I'm not even sure I need to use Core Data to manage it; maybe just saving arrays would be simpler. Any input you can offer would be appreciated.
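
    A worked example of the slot arithmetic only (2,016 = 7 days x 24 hours x 12 five-minute intervals per hour), written in C# purely for consistency with the other snippets on this page; the same calculation carries over to Objective-C unchanged.

      // dayOfWeek: 0-6 with Monday = 0 (an assumption), hour: 0-23, minute: 0-59.
      static int SlotIndex(int dayOfWeek, int hour, int minute)
      {
          return dayOfWeek * 24 * 12 + hour * 12 + minute / 5;   // 0 .. 2015
      }

      // Friday 12:15pm: 4 * 288 + 12 * 12 + 15 / 5 = 1152 + 144 + 3 = 1299,
      // matching the "time slot 1,299" example in the question.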

    Read the article

  • Open source gravatar-like implementations?

    - by Tauren
    I'm already using gravatar icons for the users of my web service. However, I'm finding several problems with this approach: Only a small percentage of the users take the time to set up a gravatar profile. My users are not tech-savvy, but would be likely to add a dedicated photo to my site. Users of my service are encouraged to use images that depict them in proper uniform for the industry my service relates to. They wouldn't want that same picture to be used for personal purposes throughout the internet. They would not take the time or effort to manage a separate email address and gravatar account just to have an "in-uniform" profile photo for my service. Before I implement my own profile image feature, I was wondering if there are any open-source solutions that I could leverage with similar features to gravatar. Specifically: The ability to display any size thumbnail (up to 512px would be fine) Takes care of caching different sized thumbnails Has support for something like identicons, preferably pluggable with different style algorithms (monsters, etc.), even better if I can customize these Ability to fall-back to gravatar if no photo found Does anything like this exist? I haven't found it yet if it does.
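
    A minimal sketch of the "fall back to gravatar if no photo found" item only, assuming the standard Gravatar URL scheme (MD5 of the trimmed, lower-cased e-mail, s= for sizes up to 512px, d=identicon for a generated fallback); it is not a substitute for the self-hosted solutions being asked about.

      using System;
      using System.Security.Cryptography;
      using System.Text;

      static class GravatarFallback
      {
          public static string UrlFor(string email, int sizePx)
          {
              using (var md5 = MD5.Create())
              {
                  byte[] hash = md5.ComputeHash(
                      Encoding.UTF8.GetBytes(email.Trim().ToLowerInvariant()));

                  var hex = new StringBuilder();
                  foreach (byte b in hash)
                      hex.Append(b.ToString("x2"));

                  // d=identicon asks Gravatar to generate a fallback image.
                  return string.Format(
                      "https://www.gravatar.com/avatar/{0}?s={1}&d=identicon",
                      hex, sizePx);
              }
          }
      }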

    Read the article

  • a facebook app question

    - by Robert
    I have written a small app and put it on facebook. I got an application ID and secret. Then I wrote the following script to access my app (just as told on the facebook page). <?php require './src/facebook.php'; $facebook = new Facebook(array( 'appId' => 'xxxx', 'secret' => 'xxxx', 'cookie' => true, // enable optional cookie support )); try { $me = $facebook->api('/me'); } catch (FacebookApiException $e) { error_log($e); } if ($facebook->getSession()) { echo '<a href="' . $facebook->getLogoutUrl() . '">Logout</a>'; } else { echo '<a href="' . $facebook->getLoginUrl() . '">Login</a>'; } ?> Then I started running this script. It prompted me with the login link, then took me to the facebook login page. However, after I enter my facebook login details, I get this error page: Error. API Error Code: 100 API Error Description: Invalid parameter Error Message: next is not owned by the application. Could anyone help me a little bit please, I am really confused here about what's going on.

    Read the article

  • Clarification needed: How does .NET runtime resolve assembly references from parent folder?

    - by aoven
    I have the following output structure of executables in my solution:

      %ProgramFiles%
       +-[MyAppName]
          +-[Client]
          |   +-(EXE & several DLL assemblies)
          +-[Common]
          |   +-[Schema Assemblies]
          |   |   +-(several DLL assemblies)
          |   +-(several DLL assemblies)
          +-[Server]
              +-(EXE & several DLL assemblies)

    Each project in the solution references different DLL assemblies, some of which are outputs from other projects in the solution, and others are plain 3rd-party assemblies. For example, the [Client] EXE might reference an assembly in [Common], which is in a different directory branch. All references have "Copy Local" set to false, to mirror the layout of the files in the final installed application. Now, if I take a look at the reference properties in the Visual Studio IDE, I see that the "Path" of every reference is absolute and that it corresponds to the actual output location of the assembly. That's understandable and correct. As expected, the solution compiles and runs just fine. What I don't understand is why everything seems to work even when I close the IDE, rename the [MyAppName] directory and run the [Client] EXE manually. How does the runtime find the assemblies if the reference paths aren't the same as they were at the time of linking? To be clear - this is actually exactly what I'm after: a semi-dispersed set of application files that run fine regardless of where the [MyAppName] directory is located or even what it's named. I'd just like to know how and why this works without any specific path resolution on my part. I've read the answers to this similar question, but I still don't get it. Help much appreciated!
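
    For completeness, a hedged sketch of one way to make a sibling-folder layout like this explicit instead of relying on default probing: hook AppDomain.AssemblyResolve in the EXE and probe the [Common] folders from the tree above. Folder names are taken from the question; nothing here claims to be the reason the current setup already works.

      using System;
      using System.IO;
      using System.Reflection;

      static class SiblingFolderResolver
      {
          public static void Install()
          {
              AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
              {
                  // BaseDirectory is e.g. ...\MyAppName\Client when the Client EXE runs.
                  string baseDir = AppDomain.CurrentDomain.BaseDirectory;
                  string fileName = new AssemblyName(args.Name).Name + ".dll";

                  foreach (string folder in new[] { "Common", Path.Combine("Common", "Schema Assemblies") })
                  {
                      string candidate = Path.GetFullPath(Path.Combine(baseDir, "..", folder, fileName));
                      if (File.Exists(candidate))
                          return Assembly.LoadFrom(candidate);
                  }
                  return null;   // fall back to the runtime's normal probing
              };
          }
      }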

    Read the article

  • What's the fastest way to bulk insert a lot of data in SQL Server (C# client)

    - by Andrew
    I am hitting some performance bottlenecks with my C# client inserting bulk data into a SQL Server 2005 database and I'm looking for ways in which to speed up the process. I am already using the SqlClient.SqlBulkCopy (which is based on TDS) to speed up the data transfer across the wire which helped a lot, but I'm still looking for more. I have a simple table that looks like this: CREATE TABLE [BulkData]( [ContainerId] [int] NOT NULL, [BinId] [smallint] NOT NULL, [Sequence] [smallint] NOT NULL, [ItemId] [int] NOT NULL, [Left] [smallint] NOT NULL, [Top] [smallint] NOT NULL, [Right] [smallint] NOT NULL, [Bottom] [smallint] NOT NULL, CONSTRAINT [PKBulkData] PRIMARY KEY CLUSTERED ( [ContainerIdId] ASC, [BinId] ASC, [Sequence] ASC )) I'm inserting data in chunks that average about 300 rows where ContainerId and BinId are constant in each chunk and the Sequence value is 0-n and the values are pre-sorted based on the primary key. The %Disk time performance counter spends a lot of time at 100% so it is clear that disk IO is the main issue but the speeds I'm getting are several orders of magnitude below a raw file copy. Does it help any if I: Drop the Primary key while I am doing the inserting and recreate it later Do inserts into a temporary table with the same schema and periodically transfer them into the main table to keep the size of the table where insertions are happening small Anything else? -- Based on the responses I have gotten, let me clarify a little bit: Portman: I'm using a clustered index because when the data is all imported I will need to access data sequentially in that order. I don't particularly need the index to be there while importing the data. Is there any advantage to having a nonclustered PK index while doing the inserts as opposed to dropping the constraint entirely for import? Chopeen: The data is being generated remotely on many other machines (my SQL server can only handle about 10 currently, but I would love to be able to add more). It's not practical to run the entire process on the local machine because it would then have to process 50 times as much input data to generate the output. Jason: I am not doing any concurrent queries against the table during the import process, I will try dropping the primary key and see if that helps. ~ Andrew
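
    A hedged sketch of the SqlBulkCopy knobs most often worth measuring for this kind of load (table lock, batch size, no timeout); the connection string is a placeholder, and nothing below is offered as the definitive answer to the question.

      using System.Data;
      using System.Data.SqlClient;

      static class BulkLoader
      {
          public static void Load(DataTable chunk, string connectionString)
          {
              using (var connection = new SqlConnection(connectionString))
              {
                  connection.Open();
                  using (var bulk = new SqlBulkCopy(connection, SqlBulkCopyOptions.TableLock, null))
                  {
                      bulk.DestinationTableName = "BulkData";
                      bulk.BatchSize = 5000;      // worth tuning against the ~300-row chunks
                      bulk.BulkCopyTimeout = 0;   // no timeout for long-running loads
                      bulk.WriteToServer(chunk);
                  }
              }
          }
      }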

    Read the article

  • Changing CSS Rules using JavaScript or jQuery

    - by Praveen Kumar
    Initial research: I am aware of using .css() to get and set the CSS rules of a particular element. I have seen a website with this CSS: body, table td, select { font-family: Arial Unicode MS, Arial, sans-serif; font-size: small; } I never liked Arial Unicode as a font; well, that was my personal feeling. So, I would use Chrome's Style Inspector to change Arial Unicode MS to Segoe UI or something else I like. Is there any way, other than using the following, to achieve the same? Case I: $("body, table td, select").css("font-family", "Segoe UI"); Recursive and performance-intensive; doesn't work when things are loaded on the fly. Case II: $('<style>body, table td, select {font-family: "Segoe UI";}</style>').appendTo("head"); Any better method than this? It creates a lot of <style> tags!

    Read the article

  • Is there any tool to extract keywords from an English text or article in Java?

    - by user555581
    Dear experts, I am trying to identify the type of a web site (in English) by machine. I download the site's homepage as an HTML page, parse it, and get the content of the web page; some example content from CNN.com is shown at the end of this post. I try to get the keywords of the web page and map them against my database. If the keywords include terms like "news" or "breaking news", the site is classified as a news site. If there are words like "healthy" or "medical", it will be classified as a medical site. There are tools that can do text segmentation, but it is not easy to find a tool that handles the semantics: for example, "online shopping" is one keyword and should not be split into two words. The combination is the helpful information, whereas "online" and "shopping" on their own are less useful, since "online" may also appear in "online travel"... • Newark, JFK airports reopen • 1 runway reopens at LaGuardia Airport • Over 4,155 flights were cancelled Monday • FULL STORY * LaGuardia Airport snowplows busy Video * Are you stranded? | Airport delays * Safety tips for winter weather * Frosty fun Video | Small dog, deep snow Latest news * Easter eggs used to smuggle cocaine * Salmonella forces cilantro, parsley recall * Obama's surprising verdict on Vick * Blue Note baritone Bernie Wilson dead * Busch aide to 911: She's not waking up * Girl, 15, last seen working at store in '90 * Teena Marie's death shocks fans * Terror network 'dismantled' in Morocco * Saudis: 'Militant' had al Qaeda ties * Ticker: Gov. blasts Obama 'birthers' * Game show goof is 800K mistakeVideo * Chopper saves calf on frozen pondVideo * Pickpocketing becomes hands-freeVideo * Chilean miners going to Disney World * Who's the most intriguing of 2010? * Natalie Portman is pregnant, engaged * 'Convert all gifts from aunt' CNNMoney * Who controls the thermostat at home? * This Just In: CNN's news blog

    Read the article

  • Unit tests and Test Runner problems under .Net 4.0

    - by Brett Rigby
    Hi there, We're trying to migrate a .Net 3.5 solution to .Net 4.0, but are experiencing complications with the testing frameworks that can operate on an assembly built against version 4.0 of the .Net Framework. Previously, we used NUnit 2.4.3.0 and NCover 1.5.8.0 within our NAnt scripts, but NUnit 2.4.3.0 doesn't like .Net 4.0 projects. So, we upgraded to a newer version of the NUnit framework within the test project itself, but then found that NCover 1.5.8.0 doesn't support this version of NUnit. We get errors in the code saying words to the effect that the assembly was built using a newer version of the .Net Framework than is currently in use, as it's using .Net Framework 2.0 to run the tools. We then tried using Gallio's Icarus test runner GUI, but found that this and MbUnit only support up to version 3.5 of the .Net Framework, and the result is "the tests will be ignored". In terms of the coverage side of things (for reporting into CruiseControl.net), we have found that PartCover is a good candidate for substituting out NCover (as the newer version of NCover is quite dear, and PartCover is free), but this is a few steps down the line yet, as we can't get the test runners to work first! Can anyone shed any light on a testing framework that will run under .Net 4.0 in the way I've described above? If not, I fear we may have to revert to using .Net 3.5 until the manufacturers of the tooling that we're currently using have a chance to upgrade to .Net 4.0. Thanks.

    Read the article

  • SubSonic 3.0 Simple Repository Adding a DateTime Property To An Object

    - by Blounty
    I am trying out SubSonic to see if it is viable to use on production projects. I seem to have stumbled upon an issue with regard to updating the database with default values (String and DateTime) when a new column is created, i.e. when a new DateTime or String property is added to an object: public class Bug { public int BugId { get; set; } public string Title { get; set; } public string Overview { get; set; } public DateTime TrackedDate { get; set; } public DateTime RemovedDate { get; set; } } When the code to add that type of object to the database is run var repository = new SimpleRepository(SimpleRepositoryOptions.RunMigrations); repository.Add(new Bug() { Title = "A Bug", Overview = "An Overview", TrackedDate = DateTime.Now }); it creates the following SQL: UPDATE Bugs SET RemovedDate=''01/01/1900 00:00:00'' For some reason it is adding two single quotes at each end of the string or DateTime value. This is causing the following error: System.Data.SqlClient.SqlException - Incorrect syntax near '01' I am connecting to SQL Server 2005. Any help would be appreciated, as apart from this issue I am finding SubSonic to be a great product. I have created a screen cast of my error here:
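
    A small hedged sketch of one thing worth testing, not a SubSonic fix: give every DateTime property an explicit value before Add(), so only the migration's own backfill UPDATE (the statement with the doubled quotes) remains in play. If that UPDATE itself is what fails, the quoting looks like something to raise with SubSonic rather than something this code can work around.

      var repository = new SimpleRepository(SimpleRepositoryOptions.RunMigrations);

      repository.Add(new Bug
      {
          Title = "A Bug",
          Overview = "An Overview",
          TrackedDate = DateTime.Now,
          RemovedDate = DateTime.Now   // explicit value instead of the implicit default
      });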

    Read the article

  • How to recursive rake? -- or suitable alternatives

    - by TerryP
    I want my projects top level Rakefile to build things using rakefiles deeper in the tree; i.e. the top level rakefile says how to build the project (big picture) and the lower level ones build a specific module (local picture). There is of course a shared set of configuration for the minute details of doing that whenever it can be shared between tasks: so it is mostly about keeping the descriptions of what needs building, as close to the sources being built. E.g. /Source/Module/code.foo and cie should be built using the instructions in /Source/Module/Rakefile; and /Rakefile understands the dependencies between modules. I don't care if it uses multiple rake processes (ala recursive make), or just creates separate build environments. Either way it should be self-containable enough to be processed by a queue: so that non-dependent modules could be built simultaneously. The problem is, how the heck do you actually do something like that with Rake!? I haven't been able to find anything meaningful on the Internet, nor in the documentation. I tried creating a new Rake::Application object and setting it up, but whatever methods I try invoking, only exceptions or "Don't know how to build task ':default'" errors get thrown. (Yes, all rakefiles have a :default). Obviously one could just execute 'rake' in a sub directory for a :modulename task, but that would ditch the options given to the top level; e.g. think of $(MAKE) and $(MAKEFLAGS). Anyone have a clue on how to properly do something like a recursive rake?

    Read the article

  • Finding a 3rd party QWidget with injected code & QWidget::find(hwnd)

    - by David Menard
    Hey, I have a Qt Dll wich I inject into a third-party Application using windows detours library: if(!DetourCreateProcessWithDll( Path, NULL, NULL, NULL, TRUE, CREATE_DEFAULT_ERROR_MODE | CREATE_SUSPENDED, NULL, NULL, &si, &pi, "C:\\Program Files\\Microsoft Research\\Detours Express 2.1\\bin\\detoured.dll", "C:\\Users\\Dave\\Documents\\Visual Studio 2008\\Projects\\XOR\\Debug\\XOR.dll", NULL)) and then I set a system-wide hook to intercept window creation: HHOOK h_hook = ::SetWindowsHookEx(WH_CBT, (HOOKPROC)CBTProc, Status::getInstance()->getXORInstance(), 0); Where XOR is my programs name, and Status::getInstance() is a Singleton where I keep globals. In my CBTProc callback, I want to intercept all windows that are QWidgets: HWND hwnd= FindWindow(L"QWidget", NULL); which works well, since I get a corresponding HWND (I checked with Spy++) Then, I want to get a pointer to the QWidget, so I can use its functions: QWidget* q = QWidget::find(hwnd); but here's the problem, the returned pointer is always 0. Am I not injecting my code into the process properly? Or am I not using QWidget::find() as I should? Thanks, Dave EDIT:If i change the QWidget::find() function to an exported function of my DLL, after setting the hooks (so I can set and catch a breakpoint), QWidgetPrivate::mapper is NULL.

    Read the article

  • ASP.NET MVC intermittent slow response

    - by arehman
    Problem: In our production environment, the system occasionally delays the page response of an ASP.NET MVC application by up to 30 seconds or so, even though the same page renders in 2-3 seconds most of the time. This happens randomly, on any arbitrary page, for both GET and POST requests. For example, the log files indicate the system took 15 seconds to complete a request for the jQuery script file, and 10 seconds for another small CSS file. Similar problems: Random Slow Downs. Production environment: Windows Server 2008 Standard (32-bit), app pool running in integrated mode, ASP.NET MVC 1.0. What we have tried/observed: Moved the application to a standalone web server, but it didn't help. We have never noticed the same issue on that server for any plain ASP.NET application. App pool settings are fine; no abrupt recycles/shutdowns. No CPU spikes or memory problems. No delays due to SQL queries or the like. It seems as if something is causing a delay along the HTTP pipeline, or the worker process is seeing the request late. Looking for other suggestions. -- Thanks

    Read the article

  • Problem with SQLite related nUnit-tests after upgrade to VS2010 and Re#5

    - by stiank81
    After converting to Visual Studio 2010 with ReSharper5 some of my unit tests started failing. More specifically this applies to all unit tests that use NHibernate with SQLite. The problem seem to be related to SQLite somehow. The unit tests that does not involve NHibernate and SQLite are still running fine. The exception is as follows: NHibernate.HibernateException : Could not create the driver from NHibernate.Driver.SQLite20Driver, NHibernate, Version=2.1.2.4000, Culture=neutral, PublicKeyToken=aa95f207798dfdb4. ----> System.Reflection.TargetInvocationException : Exception has been thrown by the target of an invocation. ----> NHibernate.HibernateException : The IDbCommand and IDbConnection implementation in the assembly System.Data.SQLite could not be found. Ensure that the assembly System.Data.SQLite is located in the application directory or in the Global Assembly Cache. If the assembly is in the GAC, use <qualifyAssembly/> element in the application configuration file to specify the full name of the assembly. TearDown : System.NullReferenceException : Object reference not set to an instance of an object. The exception is the NullReferenceException on TearDown when cleaning up NHibernate objects that wasn't successfully created, but the problem seem to be related to SQLite somehow. I run my unit tests through ReSharper, but I get the same exception when running them directly through the NUnit.exe application. However, running them through the x86 variant (NUnit-x86.exe) all tests run fine. Can it be related to some mixing of 64bit and 32bit dlls? It still runs fine through VS2008 + ReSharper4.5. Note that the target framework of my projects still is .NET3.5. Anyone seen this problem before?
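
    A tiny diagnostic sketch for the 32-bit/64-bit theory (the tests pass under NUnit-x86.exe but not the 64-bit runner): log the bitness of the process hosting the tests before the first NHibernate session is opened. IntPtr.Size is used instead of Environment.Is64BitProcess because the test projects still target .NET 3.5.

      using System;

      public static class BitnessProbe
      {
          public static void Report()
          {
              // 4 bytes = 32-bit test runner, 8 bytes = 64-bit test runner.
              Console.WriteLine("Pointer size: {0} bytes ({1}-bit process)",
                                IntPtr.Size, IntPtr.Size * 8);

              // If failures only occur in the 64-bit case, the bitness of the
              // native System.Data.SQLite interop DLL is the usual suspect.
          }
      }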

    Read the article

  • Compile and optimize for different target architectures

    - by Peter Smit
    Summary: I want to take advantage of compiler optimizations and processor instruction sets, but still have a portable application (running on different processors). Normally I could indeed compile 5 times and let the user choose the right one to run. My question is: how can I can automate this, so that the processor is detected at runtime and the right executable is executed without the user having to chose it? I have an application with a lot of low level math calculations. These calculations will typically run for a long time. I would like to take advantage of as much optimization as possible, preferably also of (not always supported) instruction sets. On the other hand I would like my application to be portable and easy to use (so I would not like to compile 5 different versions and let the user choose). Is there a possibility to compile 5 different versions of my code and run dynamically the most optimized version that's possible at execution time? With 5 different versions I mean with different instruction sets and different optimizations for processors. I don't care about the size of the application. At this moment I'm using gcc on Linux (my code is in C++), but I'm also interested in this for the Intel compiler and for the MinGW compiler for compilation to Windows. The executable doesn't have to be able to run on different OS'es, but ideally there would be something possible with automatically selecting 32 bit and 64 bit as well. Edit: Please give clear pointers how to do it, preferably with small code examples or links to explanations. From my point of view I need a super generic solution, which is applicable on any random C++ project I have later. Edit I assigned the bounty to ShuggyCoUk, he had a great number of pointers to look out for. I would have liked to split it between multiple answers but that is not possible. I'm not having this implemented yet, so the question is still 'open'! Please, still add and/or improve answers, even though there is no bounty to be given anymore. Thanks everybody!

    Read the article

  • Rails 2.3.2 trying to render ERB instead of HAML

    - by c00lryguy
    Rails is suddenly trying to render ERB instead of Haml and I can't figure out why. I've created new rails projects, reinstalled Haml, and reinstalled Rails. Here's exactly the steps I take when making my application (Rails 2.3.2): rails> rails test rails> cd test rails\test> haml --rails . rails\test> ruby script\generate model user email:string password:string rails\test> ruby script\generate controller users index rails\test> rake db:migrate Here's what the UsersController looks like: class UsersController < ApplicationController def index @users = User.all end end My routes: ActionController::Routing::Routes.draw do |map| map.resources :users end I now create views\users\index.html.haml: %table %th(style="text-align: left;") %h1 Users - for user in @users %tr %td= user.email %td= user.password Annnd run the server... I navigate to localhost:3000\users and I get this error message: Template is missing Missing template users/index.erb in view path app/views For some reason Rails is trying to find and render .erb files instead of .haml files. vendor\plugins\haml\init.rb exists, untouched. I've reinstalled Haml (Pretty Penny) multiple times and still get the same results. I've also tried adding config.gem 'haml' to my environment.rb but this also doesn't work. I can't figure out why suddenly rails will not render haml for me.

    Read the article

  • how to handle multiple profiles per user?

    - by Scott Willman
    I'm doing something that doesn't feel very efficient. From my code below, you can probably see that I'm trying to allow for multiple profiles of different types attached to my custom user object (Person). One of those profiles will be considered a default and should have an accessor from the Person class. Can this be done better? from django.db import models from django.contrib.auth.models import User, UserManager class Person(User): public_name = models.CharField(max_length=24, default="Mr. T") objects = UserManager() def save(self): self.set_password(self.password) super(Person, self).save() def _getDefaultProfile(self): def_teacher = self.teacher_set.filter(default=True) if def_teacher: return def_teacher[0] def_student = self.student_set.filter(default=True) if def_student: return def_student[0] def_parent = self.parent_set.filter(default=True) if def_parent: return def_parent[0] return False profile = property(_getDefaultProfile) def _getProfiles(self): # Inefficient use of QuerySet here. Tolerated because the QuerySets should be very small. profiles = [] if self.teacher_set.count(): profiles.append(list(self.teacher_set.all())) if self.student_set.count(): profiles.append(list(self.student_set.all())) if self.parent_set.count(): profiles.append(list(self.parent_set.all())) return profiles profiles = property(_getProfiles) class BaseProfile(models.Model): person = models.ForeignKey(Person) is_default = models.BooleanField(default=False) class Meta: abstract = True class Teacher(BaseProfile): user_type = models.CharField(max_length=7, default="teacher") class Student(BaseProfile): user_type = models.CharField(max_length=7, default="student") class Parent(BaseProfile): user_type = models.CharField(max_length=7, default="parent")

    Read the article

  • SQL Express under IIS 7.5

    - by fampinheiro
    I´m developing a web service that access a SQL Express database, it works very well in the Visual Studio host but when i deploy it to IIS 7.5 i get this exception. Please help me. Stack Trace: System.Data.EntityException: The underlying provider failed on Open. ---> System.Data.SqlClient.SqlException: Failed to generate a user instance of SQL Server due to failure in retrieving the user's local application data path. Please make sure the user has a local user profile on the computer. The connection will be closed. at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) at System.Data.SqlClient.SqlInternalConnectionTds.CompleteLogin(Boolean enlistOK) at System.Data.SqlClient.SqlInternalConnectionTds.AttemptOneLogin(ServerInfo serverInfo, String newPassword, Boolean ignoreSniOpenTimeout, Int64 timerExpire, SqlConnection owningObject) at System.Data.SqlClient.SqlInternalConnectionTds.LoginNoFailover(String host, String newPassword, Boolean redirectedUserInstance, SqlConnection owningObject, SqlConnectionString connectionOptions, Int64 timerStart) at System.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(SqlConnection owningObject, SqlConnectionString connectionOptions, String newPassword, Boolean redirectedUserInstance) at System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, Object providerInfo, String newPassword, SqlConnection owningObject, Boolean redirectedUserInstance) at System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection) at System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnection owningConnection, DbConnectionPool pool, DbConnectionOptions options) at System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnection owningObject) at System.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnection owningObject) at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject) at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection) at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory) at System.Data.SqlClient.SqlConnection.Open() at System.Data.EntityClient.EntityConnection.OpenStoreConnectionIf(Boolean openCondition, DbConnection storeConnectionToOpen, DbConnection originalConnection, String exceptionCode, String attemptedOperation, Boolean& closeStoreConnectionOnFailure) --- End of inner exception stack trace --- at System.Data.EntityClient.EntityConnection.OpenStoreConnectionIf(Boolean openCondition, DbConnection storeConnectionToOpen, DbConnection originalConnection, String exceptionCode, String attemptedOperation, Boolean& closeStoreConnectionOnFailure) at System.Data.EntityClient.EntityConnection.Open() at System.Data.Objects.ObjectContext.EnsureConnection() at System.Data.Objects.ObjectQuery`1.GetResults(Nullable`1 forMergeOption) at System.Data.Objects.ObjectQuery`1.System.Collections.Generic.IEnumerable<T>.GetEnumerator() at System.Linq.Enumerable.FirstOrDefault[TSource](IEnumerable`1 source) 
at WSCinema.CinemaService.Movie() in D:\Documents\My Dropbox\Projects\sd.v0910\trab3\code\WSCinema\CinemaService.asmx.cs:line 46
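
    A hedged sketch of one commonly suggested direction, assuming the current connection string uses "User Instance=True"/AttachDbFilename: SQL Server user instances need a loaded Windows user profile, which the IIS 7.5 application-pool identity does not get by default. Either enable "Load User Profile" on the application pool, or point at a normally attached database as below; all names are placeholders.

      using System.Data.SqlClient;

      static class CinemaDb
      {
          // Placeholder connection string that avoids the user-instance code
          // path entirely (database attached to the SQLEXPRESS instance up front).
          public const string NoUserInstance =
              "Data Source=.\\SQLEXPRESS;Initial Catalog=CinemaDb;Integrated Security=True";

          public static SqlConnection Open()
          {
              var connection = new SqlConnection(NoUserInstance);
              connection.Open();
              return connection;
          }
      }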

    Read the article

  • Firefox and IE - Firefox handling of <span> inside tables (html and css)

    - by jasrpg
    Hello, I am having difficulties getting a span tag to work properly inside a table. It appears as if the entire span tag is being ignored that is defined anywhere in between table tags in firefox, but in IE this shows up correctly. Maybe I am missing something, but I have created a small example css and html file that displays differently in Firefox and IE. Hopefully someone can tell me why it is behaving this way or how I can rearrange the html to resolve the issue in firefox. ---main.css--- .class1 A:link {color:#960033; text-decoration: none} .class1 A:visited {color:#960033; text-decoration: none} .class1 A:hover {color:#0000FF; font-weight: bold; text-decoration: none} .class1 A:active {color:#0000FF; font-weight: bold; text-decoration: none} .class2 A:link {color:#960033; text-decoration: none} .class2 A:visited {color:#960033; text-decoration: none} .class2 A:hover {color:#0000FF; text-decoration: none} .class2 A:active {color:#0000FF; text-decoration: none} ---test.htm--- <HTML> <HEAD> <title>Title Page</title> <style type="text/css">@import url("/css/main.css");</style> </HEAD> <span class="class1"> <BODY> <table><tr><td> ---Insert Hyperlink---<br> </td></tr> </span><span class="class2"> <tr><td> ---Insert Hyperlink---<br> </td></tr></table> </span> </BODY> </HTML>

    Read the article

  • using jQuery to load the body of another HTML document between entries

    - by justin hall
    I'm trying to use jQuery to load the body of another HTML document, which contains our AdSense banner. It should display under each blog entry when in list view. I'm able to get it to load text, even images, but not the banner; the banner is a small script. Here is the website we are working with: http://neverknowtech.com/data/ (that's a test page, and the 'entry' displayed there contains the intended body content). As you can see there, the body code does include an ad, and the word 'Ad'. The word 'Ad' is shown correctly below the post (as it is in list view), but the script for the Google ad doesn't seem to make it. Here is what we have in the footer currently (replace [ with < and ] with >): [script type="text/javascript"] var classSelector = ":nth-child(1n)"; if ($(".list-journal-entry-wrapper .journal-entry-wrapper").length ] 0) { $('.list-journal-entry-wrapper .journal-entry-wrapper' + classSelector).after('[div class="journal-list-ad-insert"][/div]'); $('.list-journal-entry-wrapper .journal-list-ad-insert').load("/data/journal-list-ad-insert.html .body"); } [/script] Note: the nth-child is there for when they decide how frequently to place the ads.

    Read the article
