Search Results

Search found 10755 results on 431 pages for 'cluster shared volume'.


  • Rails CSS not Loading using Heroku

    - by eWizardII
    I have the following site set up on Heroku - http://www.peerinstruction.net/users/sign_up. The issue is that I have updated the CSS, yet the change is not reflected on the site; it just shows a textbox with some edited/custom fonts. I have attached the CSS file in the following gist - https://gist.github.com/f74b626c54ecbb60bbde. The signup page layout:

    ```haml
    !!! Strict
    %html
      %head
        %title= yield(:title) || "Untitled"
        = stylesheet_link_tag 'application', 'web-app-theme/base', 'web-app-theme/themes/activo/style', 'web-app-theme/override'
        = javascript_include_tag :defaults
        = csrf_meta_tag
        = yield(:head)
      %body
        #container
          #header
            %h1
              %a{:href => "/"} Peer Instruction Network
            #user-navigation
              %ul.wat-cf
                %li
          .content.login
            .flash
              - flash.each do |type, message|
                %div{ :class => "message #{type}" }
                  %p= message
            = form_for(resource, :as => resource_name, :url => session_path(resource_name), :html => { :class => "form login" }) do |f|
              .group.wat-cf
                .left= f.label :email, :class => "label right"
                .right= f.text_field :email, :class => "text_field"
              .group.wat-cf
                .left= f.label :password, :class => "label right"
                .right= f.password_field :password, :class => "text_field"
              .group.wat-cf
                .right
                  %button.button{ :type => "submit" } Login
            /= link_to "Sign In", destroy_user_session_path
          #box
            = yield
    ```

    The signup page's HAML file:

    ```haml
    %h2
    .block
      .content.login
        .flash
          - flash.each do |type, message|
            %div{ :class => "message #{type}" }
              %p= message
        = form_for(resource, :as => resource_name, :url => registration_path(resource_name)) do |f|
          = devise_error_messages!
          %div
            = f.label :firstname
            %br/
            = f.text_field :firstname
          %div
            = f.label :middlename
            %br/
            = f.text_field :middlename
          %div
            = f.label :lastname
            %br/
            = f.text_field :lastname
          %div
            = f.label :email
            %br/
            = f.email_field :email
          %div
            = f.label :password
            %br/
            = f.password_field :password
          %div
            = f.label :academic
            %br/
            = f.text_field :academic
          %div= f.submit "Continue"
          = render :partial => "devise/shared/links"
    ```

    I used web-app-theme to create an activo theme and then modified it.

    Read the article

  • Why won't VS2010 RC use my existing types when I add a service reference?

    - by Johan Driessen
    I have a huge problem getting service references in VS2010 RC to use existing assemblies. Even though I have a class library with all the data contracts (classes marked with DataContract and properties with DataMember) that is shared between the service project and the consuming project (which is a class library), when I add a service reference the data contracts are regenerated within the service reference instead of using the existing types. When I was using VS2010 beta 2 this worked fine, and I have existing service references using the very same data contracts. But if I add a new service reference, or even update an old one, it won't use the existing types anymore. I have made a mini test solution, with one service, one data contract type, and one console app as a consumer (all in the same solution), and there it seems to work, but that's no great comfort to me. Is there any way to see why it can't use the existing types?

    Edit, to clarify: it works to generate the proxy classes with svcutil.exe, pointing it at the data contracts DLL, like this:

    ```
    svcutil.exe http://localhost/MyService.svc /reference:[Path To DataContracts]\DataContracts.dll /n:*,MyProject.MyServiceReference /ct:System.Collections.Generic.List`1
    ```

    The question is: what possible reason could there be for Visual Studio to generate its own data contracts instead of using the existing ones, even though the "reuse" checkbox is checked and the data contracts assembly is referenced?

    Read the article

  • How do attribute classes work?

    - by AaronLS
    My searches keep turning up only guides explaining how to use and apply attributes to a class. I want to learn how to create my own attribute classes and the mechanics of how they work. How are attribute classes instantiated? Are they instantiated when the class they are applied to is instantiated? Is one instantiated for each instance of the class it is applied to? E.g., if I apply the SerializableAttribute class to a MyData class and I instantiate 5 MyData instances, will there be 5 instances of the SerializableAttribute class created behind the scenes? Or is there just one instance shared between all of them? How do attribute class instances access the class they are associated with? How does a SerializableAttribute class access the class it is applied to so that it can serialize its data? Does it have some sort of SerializableAttribute.ThisIsTheInstanceIAmAppliedTo property? :) Or does it work in the reverse direction, in that whenever I serialize something, the Serialize function I pass the MyClass instance to will reflectively go through the attributes and find the SerializableAttribute instance?
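    To make the question concrete, here is a minimal C# sketch of the kind of thing I mean (the AuthorAttribute name is made up for illustration): applying a custom attribute to a class and then reaching it through reflection. The comments mark where instantiation actually happens.

    ```csharp
    using System;

    // A custom attribute is just a class deriving from System.Attribute.
    [AttributeUsage(AttributeTargets.Class)]
    public class AuthorAttribute : Attribute
    {
        private readonly string name;
        public AuthorAttribute(string name) { this.name = name; }
        public string Name { get { return name; } }
    }

    [Author("Aaron")]
    public class MyData { }

    public class Program
    {
        public static void Main()
        {
            var unused = new MyData(); // does not construct any AuthorAttribute

            // Reflection constructs the attribute instance here, on demand:
            var attr = (AuthorAttribute)Attribute.GetCustomAttribute(
                typeof(MyData), typeof(AuthorAttribute));
            Console.WriteLine(attr.Name); // prints "Aaron"
        }
    }
    ```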

    Read the article

  • Does HttpListener work well on Mono?

    - by billpg
    Hi everyone. I'm looking to write a small web service to run on a small Linux box. I prefer to code in C#, so I'm looking to use Mono. I don't want the overhead of running a full web server or Mono's version of ASP.NET. I'm thinking of having a single process with a thread dealing with each client connection, and shared memory between threads instead of a database. I've read a little on Microsoft's version of HttpListener and how it works with the Http.sys driver. Alas, Mono's documentation on this class is just the automated class interface, with no discussion of how it works under the hood. (Linux doesn't have Http.sys, so I imagine it's implemented substantially differently.) Could anyone point me towards some resources discussing this module, please? Many thanks, Bill, billpg.com

    (A little background to my question, for the interested.) Some time ago I asked this question, interested in keeping a long conversation open with lots of back-and-forth. I had settled on designing my own ad-hoc protocol, but people I spoke to really wanted a REST interface, even at the cost of the "Okay, send your command now" signal. So I wondered about running ASP.NET on a Linux/Mono server, but stumbled upon HttpListener. This seemed ideal, as each "conversation" could run in a separate thread. The thread that calls HttpListener in a loop can look up which thread each incoming connection is for and pass the reference to that thread. The alternative, for an ASP.NET-driven service, would be to have the ASPX code pick up the state from a database and write back the new state when it finishes. Yes, it would work, but that's a lot of overhead.
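    For reference, the usage pattern in question is the stock HttpListener accept loop, which looks the same on .NET and Mono (a minimal sketch; the port and response body are arbitrary):

    ```csharp
    using System;
    using System.Net;
    using System.Text;

    class Server
    {
        static void Main()
        {
            var listener = new HttpListener();
            listener.Prefixes.Add("http://localhost:8080/");
            listener.Start();
            while (true)
            {
                // Blocks until a request arrives; the context could be handed
                // off to the per-conversation thread described above.
                HttpListenerContext ctx = listener.GetContext();
                byte[] body = Encoding.UTF8.GetBytes("hello");
                ctx.Response.ContentLength64 = body.Length;
                ctx.Response.OutputStream.Write(body, 0, body.Length);
                ctx.Response.Close();
            }
        }
    }
    ```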

    Read the article

  • How to make a mapped field inherited from a superclass transient in JPA?

    - by Russ Hayward
    I have a legacy schema that cannot be changed. I am using a base class for the common features, and it contains an embedded object. There is a field that is normally mapped in the embedded object that needs to be in the persistence id for only one (of many) subclasses. I have made a new id class that includes it, but then I get the error that the field is mapped twice. Here is some example code that is much simplified to maintain the sanity of the reader:

    ```java
    @MappedSuperclass
    class BaseClass {
        @Embedded
        private Data data;
    }

    @Entity
    class SubClass extends BaseClass {
        @EmbeddedId
        private SubClassId id;
    }

    @Embeddable
    class Data {
        private int location;
        private String name;
    }

    @Embeddable
    class SubClassId {
        private int thingy;
        private int location;
    }
    ```

    I have tried @AttributeOverride, but I can only get it to rename the field. I have tried to set it to updatable = false, insertable = false, but this did not seem to work when used in the @AttributeOverride annotation. See the answer below for the solution to this issue. I realise I could change the base class, but I really do not want to split up the embedded object to separate the shared field, as it would make the surrounding code more complex and require some ugly wrapping code. I could also redesign the whole system for this corner case, but I would really rather not. I am using Hibernate as my JPA provider.

    Read the article

  • .NET assembly loading problem

    - by Simon
    I'm maintaining the build process for our application, which consists of an ASP.NET application, two different Win32 services, and other sysadmin-related applications. I want to end up with the following configuration, to be used both when debugging and when deploying:

    ```
    libraries/   -- Contains shared assemblies used by all other apps.
    web/         -- ASP.NET site
    service1/    -- Win32 service 1 (seen under the service control manager)
    service2/    -- Win32 service 2
    adminstuff/  -- Sysadmin / support stuff used for troubleshooting
    ```

    The problem is that assembly probing (privatePath in app.config) does not support relative directories outside the application root, i.e. you can't use ../libraries. Very frustrating... If I strong-name our assemblies, I could use the codeBase config element, which seems to support absolute paths, but you need to specify each assembly individually. I also tried hooking into the AppDomain.AssemblyResolve event, but I'm getting FileNotFoundException from .NET Fusion before I can even register the event handler in Main(). I don't like the idea of registering the assemblies in the GAC - too much hassle when deploying / upgrading the application. Is there another way to do this without having to specify the path of each required assembly?
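    For what it's worth, the AssemblyResolve route usually only works when it is wired up like the hedged sketch below: the handler must be registered before the JIT compiles any method that touches types from the shared assemblies, which means Main() itself must not reference them (the folder layout above is assumed).

    ```csharp
    using System;
    using System.IO;
    using System.Reflection;
    using System.Runtime.CompilerServices;

    static class Program
    {
        static void Main(string[] args)
        {
            // Register the resolver before any shared-assembly type is needed.
            AppDomain.CurrentDomain.AssemblyResolve += ResolveFromLibraries;
            Run(args);
        }

        static Assembly ResolveFromLibraries(object sender, ResolveEventArgs e)
        {
            string file = new AssemblyName(e.Name).Name + ".dll";
            string path = Path.Combine(
                AppDomain.CurrentDomain.BaseDirectory, @"..\libraries", file);
            return File.Exists(path) ? Assembly.LoadFrom(path) : null;
        }

        // NoInlining keeps the JIT from pulling shared-assembly references
        // up into Main before the handler is attached.
        [MethodImpl(MethodImplOptions.NoInlining)]
        static void Run(string[] args)
        {
            // First use of types from ../libraries goes here.
        }
    }
    ```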

    Read the article

  • Cache consistency & spawning a thread

    - by Dave Keck
    Background: I've been reading through various books and articles to learn about processor caches, cache consistency, and memory barriers in the context of concurrent execution. So far, though, I have been unable to determine whether a common coding practice of mine is safe in the strictest sense.

    Assumptions: the following pseudo-code is executed on a two-processor machine:

    ```
    int sharedVar = 0;

    myThread()
    {
        print(sharedVar);
    }

    main()
    {
        sharedVar = 1;
        spawnThread(myThread);
        sleep(-1);
    }
    ```

    main() executes on processor 1 (P1), while myThread() executes on P2. Initially, sharedVar exists in the caches of both P1 and P2 with the initial value of 0 (due to some "warm-up code" that isn't shown above).

    Question: strictly speaking - preferably without assuming any particular CPU - is myThread() guaranteed to print 1? With my newfound knowledge of processor caches, it seems entirely possible that at the time of the print() statement, P2 may not have received the invalidation request for sharedVar caused by P1's assignment in main(). Therefore, it seems possible that myThread() could print 0.

    References: these are the related articles and books I've been reading. (It wouldn't allow me to format these as links because I'm a new user - sorry.)

    - Shared Memory Consistency Models: A Tutorial - hpl.hp.com/techreports/Compaq-DEC/WRL-95-7.pdf
    - Memory Barriers: a Hardware View for Software Hackers - rdrop.com/users/paulmck/scalability/paper/whymb.2009.04.05a.pdf
    - Linux Kernel Memory Barriers - kernel.org/doc/Documentation/memory-barriers.txt
    - Computer Architecture: A Quantitative Approach - amazon.com/Computer-Architecture-Quantitative-Approach-4th/dp/0123704901/ref=dp_ob_title_bk
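    In practice the answer hinges on whether the thread-spawn primitive implies a barrier. As a concrete, hedged illustration, the same pattern in C#: the CLR is generally understood to make Thread.Start a synchronization point, which is exactly the kind of guarantee the question is probing for in the general case.

    ```csharp
    using System;
    using System.Threading;

    class Program
    {
        static int sharedVar = 0;

        static void Main()
        {
            sharedVar = 1;
            // Thread.Start is commonly described as establishing a
            // happens-before edge, so this prints 1, not 0.
            var t = new Thread(() => Console.WriteLine(sharedVar));
            t.Start();
            t.Join();
        }
    }
    ```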

    Read the article

  • permutations gone wrong

    - by vbNewbie
    I have written code to implement an algorithm I found on string permutations. What I have is an arraylist of words (up to 200), and I need to permute the list in groups of five: basically group the string words in fives and permute them. What I have takes the first 5 words, generates the permutations, and ignores the rest of the arraylist. Any ideas appreciated.

    ```vb
    Private Function permute(ByVal chunks As ArrayList, ByVal k As Long) As ArrayList
        ReDim ItemUsed(k)
        pno = 0
        Permutate(k, 1)
        Return chunks
    End Function

    Private Shared Sub Permutate(ByVal K As Long, ByVal pLevel As Long)
        Dim i As Long, Perm As String
        Perm = pString ' Save the current Perm

        ' for each value currently available
        For i = 1 To K
            If Not ItemUsed(i) Then
                If pLevel = 1 Then
                    pString = chunks.Item(i)            'pString = inChars(i)
                Else
                    pString = pString & chunks.Item(i)  'pString += inChars(i)
                End If

                If pLevel = K Then
                    'got next Perm
                    pno = pno + 1
                    SyncLock outfile
                        outfile.WriteLine(pno & " = " & pString & vbCrLf)
                    End SyncLock
                    outfile.Flush()
                    Exit Sub
                End If

                ' Mark this item unavailable
                ItemUsed(i) = True
                ' gen all Perms at next level
                Permutate(K, pLevel + 1)
                ' Mark this item free again
                ItemUsed(i) = False
                ' Restore the current Perm
                pString = Perm
            End If
        Next
    End Sub
    ```

    K above is equal to 5, the number of words in one permutation, but when I change the For loop bound to the arraylist size I get an index-out-of-bounds error.

    Read the article

  • SQL Concurrent test update question

    - by ptoinson
    Howdy folks, I have a SQL Server 2008 database in which I have a table for tags. A tag is just an id and a name. The definition of the Tag table looks like:

    ```sql
    CREATE TABLE [dbo].[Tag](
        [ID] [int] IDENTITY(1,1) NOT NULL,
        [Name] [varchar](255) NOT NULL
     CONSTRAINT [PK_Tag] PRIMARY KEY CLUSTERED
    (
        [ID] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
    )
    ```

    Name is also a unique index. Further, I have several processes adding data to this table at a pretty rapid rate. These processes use a stored proc that looks like:

    ```sql
    ALTER PROC [dbo].[lg_Tag_Insert]
        @Name varchar(255)
    AS
        DECLARE @ID int
        SET @ID = (select ID from Tag where Name = @Name)
        if @ID is null
        begin
            INSERT Tag(Name) VALUES (@Name)
            RETURN SCOPE_IDENTITY()
        end
        else
        begin
            return @ID
        end
    ```

    My issue is that, other than being a novice at concurrent database design, there seems to be a race condition that is causing me to occasionally get an error that I'm trying to enter duplicate keys (Name) into the DB. The error is: "Cannot insert duplicate key row in object 'dbo.Tag' with unique index 'IX_Tag_Name'." This makes sense; I'm just not sure how to fix it. If it were code, I would know how to lock the right areas. SQL Server is quite a different beast. First question: what is the proper way to code this "check, then update" pattern? It seems I need to get an exclusive lock on the row during the check, rather than a shared lock, but it's not clear to me the best way to do that. Any help in the right direction will be greatly appreciated. Thanks in advance.
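    One commonly suggested shape for this check-then-insert is to hold an update/range lock across the check inside a single transaction (WITH (UPDLOCK, HOLDLOCK)), so two callers cannot both see "missing" and both insert. A hedged sketch from the calling side in C#/ADO.NET, assuming the same Tag schema (connection string and error handling omitted):

    ```csharp
    using System;
    using System.Data;
    using System.Data.SqlClient;

    static class TagRepository
    {
        // The UPDLOCK/HOLDLOCK hints serialize concurrent lookups of the
        // same Name, closing the window between the SELECT and the INSERT.
        const string UpsertSql = @"
            BEGIN TRAN;
            DECLARE @ID int;
            SELECT @ID = ID FROM Tag WITH (UPDLOCK, HOLDLOCK) WHERE Name = @Name;
            IF @ID IS NULL
            BEGIN
                INSERT Tag(Name) VALUES (@Name);
                SET @ID = SCOPE_IDENTITY();
            END
            COMMIT;
            SELECT @ID;";

        public static int GetOrCreate(string connectionString, string name)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(UpsertSql, conn))
            {
                cmd.Parameters.Add("@Name", SqlDbType.VarChar, 255).Value = name;
                conn.Open();
                return Convert.ToInt32(cmd.ExecuteScalar());
            }
        }
    }
    ```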

    Read the article

  • Need help with WCF design

    - by Jason
    I have been tasked with creating a set of web services. We are a Microsoft shop, so I will be using WCF for this project. There is an interesting design consideration that I haven't been able to figure out a solution for yet. I'll try to explain it with an example: my WCF service exposes a method named Foo(). 10 different users call Foo() at roughly the same time. I have 5 special resources called R1, R2, R3, R4, and R5. We don't really need to know what the resource is, other than the fact that a particular resource can only be in use by one caller at a time. Foo() is responsible for performing an action using one of these special resources. So, in a round-robin fashion, Foo() needs to find a resource that is not in use. If no resources are available, it must wait for one to be freed up. At first this seems like an easy task: I could maybe create a singleton that keeps track of which resources are currently in use. The big problem is the fact that I need this solution to be viable in a web farm scenario. I'm sure there is a good solution to this problem, but I've just never run across this scenario before. I need some sort of resource tracker / provider that can be shared between multiple WCF hosts. Any ideas from the architects out there would be greatly appreciated!
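    For the single-host case, the singleton idea reduces to a blocking pool; a minimal sketch (illustrative names, built on .NET 4's BlockingCollection). The web-farm problem remains: this state lives in one process, so across hosts it would have to move into shared infrastructure such as a database table of resource leases.

    ```csharp
    using System.Collections.Concurrent;

    public class ResourcePool<T>
    {
        private readonly BlockingCollection<T> available =
            new BlockingCollection<T>(new ConcurrentQueue<T>());

        public ResourcePool(params T[] resources)
        {
            foreach (T r in resources)
                available.Add(r);
        }

        // Blocks the calling Foo() until a resource is free.
        public T Acquire()
        {
            return available.Take();
        }

        // Callers should release in a finally block.
        public void Release(T resource)
        {
            available.Add(resource);
        }
    }
    ```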

    Read the article

  • Quickest algorithm for finding sets with high intersection

    - by conradlee
    I have a large number of user IDs (integers), potentially millions. These users all belong to various groups (sets of integers), such that there are on the order of 10 million groups. To simplify my example and get to the essence of it, let's assume that all groups contain 20 user IDs (i.e., all integer sets have a cardinality of 20). I want to find all pairs of integer sets that have an intersection of 15 or greater. Should I compare every pair of sets? (If I keep a data structure that maps userIDs to set membership, this would not be necessary.) What is the quickest way to do this? That is, what should my underlying data structure be for representing the integer sets? Sorted sets, unsorted - can hashing somehow help? And what algorithm should I use to compute set intersection? I prefer answers that relate to C/C++ (especially STL), but any more general algorithmic insights are also welcome. Update: also note that I will be running this in parallel in a shared-memory environment, so ideas that cleanly extend to a parallel solution are preferred.
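    The parenthetical about mapping userIDs to set membership is the inverted-index idea: each user contributes a +1 to every pair of groups they belong to, so only pairs with a non-empty intersection are ever touched. A hedged sketch of that counting scheme in C# (the question asks about C/C++, but the structure carries over directly):

    ```csharp
    using System;
    using System.Collections.Generic;

    static class PairFinder
    {
        // groupMembers: groupId -> member userIds.
        // Returns packed (groupA, groupB) pair -> shared-member count.
        public static Dictionary<long, int> CountSharedMembers(
            Dictionary<int, int[]> groupMembers)
        {
            // Invert: userId -> groups containing that user.
            var userToGroups = new Dictionary<int, List<int>>();
            foreach (var g in groupMembers)
                foreach (int user in g.Value)
                {
                    List<int> gs;
                    if (!userToGroups.TryGetValue(user, out gs))
                        userToGroups[user] = gs = new List<int>();
                    gs.Add(g.Key);
                }

            var counts = new Dictionary<long, int>();
            foreach (var gs in userToGroups.Values)
                for (int i = 0; i < gs.Count; i++)
                    for (int j = i + 1; j < gs.Count; j++)
                    {
                        int a = Math.Min(gs[i], gs[j]);
                        int b = Math.Max(gs[i], gs[j]);
                        long key = ((long)a << 32) | (uint)b;
                        int c;
                        counts.TryGetValue(key, out c);
                        counts[key] = c + 1;
                    }
            return counts; // afterwards, keep pairs with count >= 15
        }
    }
    ```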

    Read the article

  • What java web application framework to use?

    - by frohiky
    One of the main products of my company is an Oracle Forms (and Reports) based application that "needs" to be re-written in another technology. Why? Users want a richer interface experience, and we want, preferably, to reduce costs with an open source application server. For this (HUGE) project we intend to use a Java web application framework, keeping these points in mind:

    We have:
    - hundreds of tables in our database (the ORM must be as flexible as possible);
    - some logic which is (and will remain) based on PL/SQL procedures/functions/packages;
    - a lot of CRUDs (the application itself is of considerable size);
    - a demand to work with/generate documents and workflows;
    - an intranet-based user environment.

    We want:
    - to offer a RIA interface experience;
    - to use (if possible) an open source app server;
    - a rapid (as possible) development framework;
    - a somewhat mature framework with a "wise" roadmap (and considerable community support);
    - an MVC approach combined with JS or GWT widgets (e.g. Vaadin or SmartGWT).

    Well, in the past weeks I've read a lot of posts and Q&As on Stack Overflow, and much more: Wicket, JSF, Tapestry, Grails, GWT, Struts2, Play, Spring, Seam, Echo, ... the list goes on! I've even researched Alfresco..! The obvious question: which one to use? At this time, any insight, recommendation, shared experience, or advice will be more than welcome!

    Read the article

  • ASP.NET MVC static-asset aids/practices

    - by shannon
    I want to keep assets that are only used by one view in a view-specific folder, so my Search.aspx reliably finds images/*.jpg and I maintain my convention:

    ```
    ~/Areas/Candidate/Views/Job/Search.aspx -> ~/Assets/Candidate/Job/Search/images/*.jpg
    ```

    Perhaps with the ability to easily reference controller- or area-common assets, manually or automatically:

    ```
    ~/Assets/Candidate/Job/images/*.jpg
    ~/Assets/Candidate/images/*.jpg
    ```

    If you wonder why I'm doing this, then speak up; I'm probably missing something. But here's why: I don't want stale static assets sitting in my ASP.NET MVC projects, which I expect to be an automatic outcome of the ~/Assets/Images folder; i.e., as a shared asset loses its last reference, who knows to delete it, especially with it being so difficult to trace content-link validity in MVC projects? How do you, personally, do this? I can imagine, for example:

    - Implementing HtmlHelper extension methods for URL generation (sketched below).
    - Extending ViewPage and ViewMasterPage with URL-generation methods.
    - Implementing an inbound request filter to search related folders for static assets.

    Also, are there good libraries out there for this? For example, something that also automatically appends timestamps for .js and .css files, writes the <script>/<link> tags for me, and maybe even allows me to inject includes into the head section from outside head code?
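    A hedged sketch of the first option (an extension method; the folder convention and names mirror the ones above): derive the asset folder from the current area/controller/action via route data, so a view can write Url.AssetPath("images/logo.jpg").

    ```csharp
    using System.Web.Mvc;

    public static class AssetHelpers
    {
        // Maps ~/Areas/Candidate/Views/Job/Search.aspx ->
        //      ~/Assets/Candidate/Job/Search/<relativePath>
        public static string AssetPath(this UrlHelper url, string relativePath)
        {
            var route = url.RequestContext.RouteData;
            string area = (route.DataTokens["area"] as string) ?? string.Empty;
            string controller = (string)route.Values["controller"];
            string action = (string)route.Values["action"];
            return url.Content(string.Format("~/Assets/{0}/{1}/{2}/{3}",
                area, controller, action, relativePath));
        }
    }
    ```

    In a view: <img src="<%= Url.AssetPath("images/logo.jpg") %>" />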

    Read the article

  • MongoDB - proper use of collections?

    - by zmg
    In Mongo, my understanding is that you can have databases and collections. I'm working on a social-type app that will have blogs and comments (among other things), and had previously been using MySQL with pretty heavy partitioning, in an attempt to limit possible concurrency issues. With MySQL I've stuffed all my user data into a _user database, with several tables to further partition the data (blogs, pages, etc). My immediate reaction with Mongo would be to create a 'users' database with one collection per user. In this way, user 'zach''s blog entries would go into the 'zach' collection, with associated comments and such becoming sub-objects in the same collection. Basically like dynamically creating one table per user in MySQL, but apparently without the complexity and limitations that might impose. Of course, since I haven't really used Mongo before, I'm having trouble gauging the (ahem..) quality of this idea and the potential problems it might cause down the road. I'd like user data to be treated a lot like a user's directory in a *nix environment, where user-created/(mostly) non-shared data gets put into one place (currently with MySQL that would be the appname_users mentioned above). Most of the user data will be specific to the user's page(s). Some of the user data which is queried across all site users (searchable user profiles) is currently kept in a separate database/table, and I expect things like this could be put into an appname_system database and be broken up into collections and/or application-specific databases (appname_profiles). Anyway, since the available documentation on this is currently a little thin and my experience is extremely limited, I thought I might find a little guidance from someone with a better working understanding of the system. On the plus side, I'd really already been attempting to treat MySQL as a schema-less document store, and doing this with Mongo seems much more intuitive/sane/rational, so I'm really looking forward to getting started. Thanks, Zach
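    For contrast with the per-user-collection idea, a hedged sketch of the single-shared-collection shape (written with today's C# driver purely for illustration; field names are made up): one 'blogs' collection for all users, comments embedded in each post, and per-user access done with an indexed query.

    ```csharp
    using MongoDB.Bson;
    using MongoDB.Driver;

    class BlogExample
    {
        static void Main()
        {
            var db = new MongoClient("mongodb://localhost")
                .GetDatabase("appname_users");
            var blogs = db.GetCollection<BsonDocument>("blogs");

            // One document per blog entry; comments ride along as sub-objects.
            blogs.InsertOne(new BsonDocument
            {
                { "user", "zach" },
                { "title", "First post" },
                { "comments", new BsonArray
                    { new BsonDocument { { "by", "amy" }, { "text", "hi" } } } }
            });

            // All of zach's entries, via an index on "user" rather than a
            // dedicated 'zach' collection.
            var zachPosts = blogs.Find(new BsonDocument("user", "zach")).ToList();
        }
    }
    ```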

    Read the article

  • Using WCF to expose underlying process

    - by Steven
    I think I must be a little dull, because I'm having so much difficulty with this. I use WCF for pretty much everything in-house; it's the most appropriate technology. I have a new Silverlight 3 app that is connecting to the WCF service, and that's working fine. Where the problem begins: because of the expense of creating the objects within this service, and the high correlation of individual objects being shared between clients, I want to have a console application that basically gathers/calculates/caches all the data for the service 24/7, and the service basically connects to the console app (or whatever it is) and gets the pre-processed data. E.g., think of it in terms of a stock reporting app (which it is). Person A has a portfolio of x, y, z. Person B has a portfolio of x, q, z, r. The service needs to provide updated metrics on how their portfolios are performing. So instead of processing person A, then person B, every second, the app independently gathers the stock price and person position information into memory, and the service just queries the in-memory result. Thanks for your help, I really am feeling dumb right now.
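    One hedged way to wire this up (illustrative names throughout): host a small WCF endpoint inside the console process itself, over named pipes, and have the public-facing service query it for the pre-computed numbers.

    ```csharp
    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IPortfolioCache
    {
        [OperationContract]
        decimal GetMetric(string personId);
    }

    // Single instance so every call sees the same continuously updated cache.
    [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
    public class PortfolioCache : IPortfolioCache
    {
        // A background gatherer thread would keep this data fresh 24/7.
        public decimal GetMetric(string personId)
        {
            return 0m; // look up the cached metric here
        }
    }

    class ConsoleHost
    {
        static void Main()
        {
            var host = new ServiceHost(new PortfolioCache(),
                new Uri("net.pipe://localhost/portfolio"));
            host.AddServiceEndpoint(typeof(IPortfolioCache),
                new NetNamedPipeBinding(), "");
            host.Open();
            Console.WriteLine("Cache host running; press Enter to quit.");
            Console.ReadLine();
            host.Close();
        }
    }
    ```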

    Read the article

  • Is it safe to reuse javax.xml.ws.Service objects

    - by Noel Ang
    I have a JAX-WS-style web service client that was auto-generated with the NetBeans IDE. The generated proxy factory (which extends javax.xml.ws.Service) delegates proxy creation to the various Service.getPort methods. The application that I am maintaining instantiates the factory and obtains a proxy each time it calls the targeted service. Creating the new proxy factory instances repeatedly has been shown to be expensive, given that the WSDL documentation supplied to the factory constructor, an HTTP URI, is re-retrieved for each instantiation. We had success in improving the performance by caching the WSDL, but this has ugly maintenance and packaging implications for us. I would like to explore the suitability of caching the proxy factory itself. Is it safe? E.g., can two different client classes, executing on the same JVM and targeting the same web service, safely use the same factory to obtain distinct proxy objects (or a shared, reentrant one)? I've been unable to find guidance from either the JAX-WS specification or the javax.xml.ws API documentation. The factory-proxy multiplicity is unclear to me. Having Service.getPort rather than Service.createPort does not inspire confidence.

    Read the article

  • How to run White + SL4 UATs through TeamCity?

    - by Duncan Bayne
    After experiencing a series of unpleasant issues with TFS, including source code corruption and project management inflexibility, we (meaning the project team of which I'm a part) have decided to move from TFS 2010 to TeamCity + SVN + V1. I've managed to get our MSTest component and unit tests running as part of every build. However, our UATs are failing, and I was hoping for some advice from the TeamCity community as to best practices w.r.t. running web servers and interacting with the desktop. Each of our UAT fixtures starts a web server to host the site, like this:

    ```csharp
    public static void StartWebServer()
    {
        var pathToSite = @"C:\projects\myproject\FrontEnd\MyProject.FrontEnd.Web";
        var webServer = new Process
        {
            StartInfo = new ProcessStartInfo
            {
                Arguments = string.Format("/port:9150 /path:\"{0}\"", pathToSite),
                FileName = @"C:\Program Files (x86)\Common Files\microsoft shared\DevServer\10.0\WebDev.WebServer40.EXE"
            }
        };
        webServer.Start();
    }
    ```

    Needless to say, this doesn't work when running through TeamCity, as the pathToSite value is different each time. I'm hoping there is a way of determining the path into which the code is checked out prior to building? That would allow me to point the web server at the right place. The other issue is that our UATs use White to drive the Silverlight UI through an instance of Internet Explorer:

    ```csharp
    _browserWindow = InternetExplorer.Launch(
        "http://localhost:9150/index.html#/Home",
        "Home - Windows Internet Explorer");
    _document = _browserWindow.SilverlightDocument;
    ```

    I've ensured that the TeamCity service is granted the ability to interact with the desktop, and I've set the build agent machine up to log in automatically (an open session is a prerequisite for White to work properly). Is that all I need to do, or are there additional steps required?
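    For the path problem, one hedged workaround that avoids TeamCity-specific configuration entirely: derive pathToSite from the test assembly's own location, since the tests and the site are checked out into the same tree (the relative segments below are assumptions about the repository layout).

    ```csharp
    using System;
    using System.IO;

    static class SitePath
    {
        // e.g. <checkout>\FrontEnd\UATs\bin\Debug ->
        //      <checkout>\FrontEnd\MyProject.FrontEnd.Web
        public static string GetPathToSite()
        {
            string baseDir = AppDomain.CurrentDomain.BaseDirectory;
            return Path.GetFullPath(Path.Combine(
                baseDir, @"..\..\..\MyProject.FrontEnd.Web"));
        }
    }
    ```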

    Read the article

  • SSH traffic over openvpn freezes under weird circumstances

    - by user289581
    I have an openvpn (version 2.1_rc15 at both ends) connection set up between two gentoo boxes using shared keys. It works fine for the most part: I use mysql, http, ftp, and scp over the vpn with no problems. But when I ssh from the client to the server over the vpn, weird things happen. I can log in and I can execute some commands, but if I try to run an ncurses application like top, or I try to cat a file, the connection will stall and I'll have to sever the ssh session. I can, for example, execute "echo blah; echo .; echo blah" and it will output the three lines of text over the ssh session fine. But if I execute "cat /etc/motd", the session will freeze the moment I press enter. While it seems like a terminal emulation problem, it makes no sense that using the vpn would affect ssh's ability to render things correctly. I am at a loss to explain why everything else works, including scp, but ssh just breaks over the vpn. Any thoughts?

    Read the article

  • C++ error: expected initializer before ‘&’ token

    - by Werner
    Hi, the following piece of C++ code compiled two years ago on a SuSE 10.1 Linux machine:

    ```cpp
    #ifndef DATA_H
    #define DATA_H

    #include <iostream>
    #include <iomanip>

    inline double sqr(double x)
    {
        return x*x;
    }

    enum Direction { X,Y,Z };

    inline Direction next(const Direction d)
    {
        switch(d) {
            case X: return Y;
            case Y: return Z;
            case Z: return X;
        }
    }

    inline ostream& operator<<(ostream& os,const Direction d)
    {
        switch(d) {
            case X: return os << "X";
            case Y: return os << "Y";
            case Z: return os << "Z";
        }
    }
    ...
    ```

    Now I am trying to compile it on Ubuntu 9.10 and I get the error:

    ```
    data.h:20: error: expected initializer before ‘&’ token
    ```

    which refers to the line:

    ```cpp
    inline ostream& operator<<(ostream& os,const Direction d)
    ```

    The g++ used on this machine is:

    ```
    Using built-in specs.
    Target: x86_64-linux-gnu
    Configured with: ../src/configure -v --with-pkgversion='Ubuntu 4.4.1-4ubuntu9' --with-bugurl=file:///usr/share/doc/gcc-4.4/README.Bugs --enable-languages=c,c++,fortran,objc,obj-c++ --prefix=/usr --enable-shared --enable-multiarch --enable-linker-build-id --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --with-gxx-include-dir=/usr/include/c++/4.4 --program-suffix=-4.4 --enable-nls --enable-clocale=gnu --enable-libstdcxx-debug --enable-objc-gc --disable-werror --with-arch-32=i486 --with-tune=generic --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu
    Thread model: posix
    gcc version 4.4.1 (Ubuntu 4.4.1-4ubuntu9)
    ```

    Could you give me some hint about this error? Thanks

    Read the article

  • Action Filter Dependency Injection in ASP.NET MVC 3 RC2 with StructureMap

    - by Ben
    Hi, I've been playing with the DI support in ASP.NET MVC 3 RC2. I have implemented session-per-request for NHibernate and need to inject ISession into my "unit of work" action filter. If I reference the StructureMap container directly (ObjectFactory.GetInstance) or use DependencyResolver to get my session instance, everything works fine:

    ```csharp
    ISession Session
    {
        get { return DependencyResolver.Current.GetService<ISession>(); }
    }
    ```

    However, if I attempt to use my StructureMap filter provider (inherits FilterAttributeFilterProvider), I have problems with committing the NHibernate transaction at the end of the request. It is as if ISession objects are being shared between requests. I see this frequently, as all my images are loaded via an MVC controller, so I get 20 or so NHibernate sessions created on a normal page load. I added the following to my action filter:

    ```csharp
    ISession Session
    {
        get { return DependencyResolver.Current.GetService<ISession>(); }
    }

    public ISession SessionTest { get; set; }

    public override void OnResultExecuted(System.Web.Mvc.ResultExecutedContext filterContext)
    {
        bool sessionsMatch = (this.Session == this.SessionTest);
        // ...
    }
    ```

    SessionTest is injected using the StructureMap filter provider. I found that on a page with 20 images, "sessionsMatch" was false for 2-3 of the requests. My StructureMap configuration for session management is as follows:

    ```csharp
    For<ISessionFactory>().Singleton()
        .Use(new NHibernateSessionFactory().GetSessionFactory());
    For<ISession>().HttpContextScoped()
        .Use(ctx => ctx.GetInstance<ISessionFactory>().OpenSession());
    ```

    In global.asax I call the following at the end of each request:

    ```csharp
    public Global()
    {
        EndRequest += (sender, e) =>
        {
            ObjectFactory.ReleaseAndDisposeAllHttpScopedObjects();
        };
    }
    ```

    Is this configuration thread-safe? Previously I was injecting dependencies into the same filter using a custom IActionInvoker. This worked fine until MVC 3 RC2, when I started experiencing the problem above, which is why I thought I would try using a filter provider instead. Any help would be appreciated.

    Ben

    P.S. I'm using NHibernate 3 RC and the latest version of StructureMap.

    Read the article

  • Phonegap/Cordova geolocation not working on Android

    - by Kreeki
    I'm having trouble getting geolocation to work on Android, in both the emulator (even when I geo fix over telnet) and on a device. It works on iOS, WP8, and in the browser. When I ask the device for its location using the following code, I always get an error (in my case the custom "Retrieving your position failed for unknown reason." with null for both the error code and the error message). Related code:

    ```coffeescript
    successHandler = (position) ->
      resolve App.Location.create
        lat: position.coords.latitude
        lng: position.coords.longitude

    errorHandler = (error) ->
      error = switch error.code
        when 1
          App.LocationError.create
            message: 'You haven\'t shared your location.'
        when 2
          App.LocationError.create
            message: 'Couldn\'t detect your current location.'
        when 3
          App.LocationError.create
            message: 'Retrieving your position timeouted.'
        else
          App.LocationError.create
            message: 'Retrieving your position failed for unknown reason. Error code: ' + error.code + '. Error message: ' + error.message
      reject(error)

    options =
      maximumAge: Infinity # I also tried with 0
      timeout: 60000
      enableHighAccuracy: true

    navigator.geolocation.getCurrentPosition(successHandler, errorHandler, options)
    ```

    platforms/android/AndroidManifest.xml:

    ```xml
    <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
    <uses-permission android:name="android.permission.ACCESS_LOCATION_EXTRA_COMMANDS" />
    ```

    www/config.xml (just in case):

    ```xml
    <feature name="Geolocation">
        <param name="android-package" value="org.apache.cordova.GeoBroker" />
    </feature>
    ```

    Using Cordova 3.1.0. Testing on Android 4.2. Plugin installed. cordova.js included in index.html (other plugins like InAppBrowser are working fine).

    ```
    $ cordova plugins ls
    [ 'org.apache.cordova.console',
      'org.apache.cordova.device',
      'org.apache.cordova.dialogs',
      'org.apache.cordova.geolocation',
      'org.apache.cordova.inappbrowser',
      'org.apache.cordova.vibration' ]
    ```

    I'm clueless. Am I missing something?

    Read the article

  • Music meta missing from Facebook /home

    - by Peter Watts
    When somebody shares a Spotify playlist, the attachment is missing from the Graph API.

    What is shown in Facebook: [screenshot of the post with the embedded playlist attachment]

    What is returned by the Graph API:

    ```json
    {
      "id": "********_******",
      "from": {
        "name": "*****",
        "id": "*****"
      },
      "message": "Refused's setlist from last night's secret show in Sweden...",
      "icon": "http://photos-c.ak.fbcdn.net/photos-ak-snc1/v85005/74/174829003346/app_2_174829003346_5511.gif",
      "actions": [
        { "name": "Comment", "link": "http://www.facebook.com/*****/posts/*****" },
        { "name": "Like", "link": "http://www.facebook.com/*****/posts/*****" },
        { "name": "Get Spotify", "link": "http://www.spotify.com/redirect/download-social" }
      ],
      "type": "link",
      "application": {
        "name": "Spotify",
        "canvas_name": "get-spotify",
        "namespace": "get-spotify",
        "id": "174829003346"
      },
      "created_time": "2012-03-01T22:24:28+0000",
      "updated_time": "2012-03-01T22:24:28+0000",
      "likes": {
        "data": [
          { "name": "***** *****", "id": "*****" }
        ],
        "count": 1
      },
      "comments": {
        "count": 0
      },
      "is_published": true
    }
    ```

    There's absolutely no reference to an attachment, other than the fact that the type is 'link' and the application is Spotify. If you want to test, Spotify's page (http://graph.facebook.com/spotify/feed) usually has a playlist or two embedded (and missing from the Graph API). Also, if you filter your home feed to just Spotify stories (http://graph.facebook.com/me/home?filter=app_174829003346), you'll get a bunch of useless stories without attachments (assuming your friends shared music recently). Does anyone have any ideas how to access the playlist details, or is it unavailable to third-party developers? (If so, this is a very bad user experience, because the story makes no sense without the attachment.) I am able to fetch scrobbles without any trouble using user_actions.listens. Also, if there is a recent-activity story, e.g. "Peter listened to The Shins", I am able to get information about the band. The problem only happens with attachments.

    Read the article

  • In IIS6, how to provide authenticated access to static files on remote server

    - by frankadelic
    We have a library of ZIP files that we would like to make available for download from an ASP.NET site. The files sit on a NAS device that is accessible from our web farm. Here is our initial strategy:

    - Map an IIS virtual directory to the shared drive at path /zipfiles.
    - Users can download the zip files when given the URL.

    However, if users share links to the files, anyone can download them. We would instead like to make use of the ASP.NET forms authentication in our site to validate users' requests before initiating the file transfer. A few problems:

    - A request for a zip file is handled by IIS, not ASP.NET, so it is not subject to forms authentication.
    - We don't want ASP.NET to handle the request anyway, because it uses up an ASP.NET thread and is not scalable for downloads of large files. So configuring the asp.net dll to handle *.zip requests is not an option.

    Any ideas on this? One idea we've tossed around is this: the initial request for a download will be for an ashx handler. This handler will, after authentication, generate a download token which is saved to a database. Then the user is redirected to the file with the token appended in the query string (e.g. /files/xyz.zip?token=123456789). An ISAPI plugin will be used to check the token. Also, the token will expire after x amount of time. Any thoughts on this? I have not implemented an ISAPI plugin, so I'm not sure if this will even work. I would like to avoid custom coding, since security is an issue and I'd prefer to use a time-tested solution.
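    A hedged sketch of the token-issuing handler half of that idea (the names, the 5-minute expiry, and the SaveToken persistence are illustrative; the ISAPI-side validation is the part that still needs proving out):

    ```csharp
    using System;
    using System.Web;

    public class DownloadHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            // Forms authentication runs here because this is an ASP.NET handler.
            if (!context.Request.IsAuthenticated)
            {
                context.Response.StatusCode = 403;
                return;
            }

            string file = context.Request.QueryString["file"]; // e.g. xyz.zip
            string token = Guid.NewGuid().ToString("N");
            SaveToken(token, file, DateTime.UtcNow.AddMinutes(5));

            // IIS serves the actual bytes; the ISAPI filter checks the token.
            context.Response.Redirect("/files/" + file + "?token=" + token);
        }

        public bool IsReusable { get { return true; } }

        private static void SaveToken(string token, string file, DateTime expiresUtc)
        {
            // Persist to the database for the ISAPI filter to validate.
        }
    }
    ```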

    Read the article

  • TFS Solution build cascading to several other builds even when common components were not modified

    - by Bob Palmer
    Hey all, here is the issue I am currently trying to work through. We are using Team Foundation Server 2008 and the automated build support it provides out of the box. We have one very large project that encompasses a number of interrelated components and web sites, each of which is set up as a Visual Studio solution file. Many of these solutions are highly interrelated, since they may contain applications, or contain common libraries or shared components. We have roughly 20 or so applications, three large web sites, and about 20 components. Each solution may include projects from other solutions. For example, a solution for a console app would also include the project files for all of the components it utilizes, since we need to ensure that when someone changes a component and rebuilds it, the change is reflected in all of the projects that consume that component, and we can make sure nothing was broken. We have build projects for each solution, whether that's an application, component, or web site. For this example, we will call them solutions 01, 02, and 03. These reference multiple projects (both their own core project and test projects, plus the projects relating to various components):

    - Solution 01 has projects A, B, and C.
    - Solution 02 has projects C, D, and E.
    - Solution 03 has projects E, F, and G.

    Now, for the problem: if I modify project A, the system ends up rebuilding all three solutions. Worse, all thirty solutions reference common projects used for data access (let's call it project H). Because they all share one project in common, if I modify any solution in my stack, even if it does not touch project H, I still end up kicking off every single build script. Any thoughts on how to address this? Ideally I would only want to kick off builds whose constituent projects were directly modified - i.e., in the example above, if I modified project C, I would only rebuild solutions 01 and 02. Thanks!

    Read the article

  • rails application on production not working

    - by Steven
    I have a rails application in production which runs under mongrel. I can successfully start mongrel for the application, but when I try to access the application at its URL it does not respond; it just hangs. This is the mongrel log... but when I hit xxx.xxx.xxx.xx:3001 it does not show the website, though in development it works fine.

    ```
    ** Starting Mongrel listening at 0.0.0.0:3001
    ** Initiating groups for "name.co.za":"name.co.za".
    ** Changing group to "name.co.za".
    ** Changing user to "name.co.za".
    ** Starting Rails with production environment...
    ** Rails loaded.
    ** Loading any Rails specific GemPlugins
    ** Signals ready.  TERM = stop.  USR2 = restart.  INT = stop (no restart).
    ** Rails signals registered.  HUP = reload (without restart).  It might not work well.
    ** Mongrel 1.1.5 available at 0.0.0.0:3001
    ** Writing PID file to /home/name.co.za/shared/log/mongrel.pid
    ```

    Read the article
