Search Results

Search found 56342 results on 2254 pages for 'versant object database'.


  • Calling next value of a sequence in jpa

    - by Javi
    Hello, I have a class mapped as an Entity so it can be persisted to a database. It has an id field as the primary key, so every time the object is persisted the value of the id is retrieved from the sequence "myClass_pk_seq", with code like the following:

        @Entity
        @Table(name="myObjects")
        public class MyClass {
            @Id
            @GeneratedValue(strategy=GenerationType.SEQUENCE, generator="sequence")
            @SequenceGenerator(name="sequence", sequenceName="myClass_pk_seq", allocationSize=1)
            @Column(name="myClassId")
            private Integer id;
            ...
        }

    I need to get the next value of the sequence myClass_pk_seq so I can reserve it (fetching the next value also increments the sequence), but without saving the object. How can I call the next value of a sequence when it's defined like this? Thanks.
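
    A minimal sketch of one way to reserve the value, assuming a PostgreSQL-style sequence and access to the EntityManager; JPA itself has no portable "next sequence value" call, so this drops to a native query (for Oracle the SQL would be "SELECT myClass_pk_seq.NEXTVAL FROM dual"):

        import javax.persistence.EntityManager;
        import javax.persistence.PersistenceContext;

        public class MyClassIdReserver {

            @PersistenceContext
            private EntityManager em;

            // Fetches (and thereby consumes) the next value of myClass_pk_seq
            // without persisting any entity.
            public Integer reserveNextId() {
                Number next = (Number) em
                    .createNativeQuery("SELECT nextval('myClass_pk_seq')")
                    .getSingleResult();
                return next.intValue();
            }
        }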

    Read the article

  • ADD COLUMN to sqlite db IF NOT EXISTS - flex/air sqlite?

    - by Adam
    I've got a flex/air app I've been working on; it uses a local sqlite database that is created on the initial application start. I've added some features to the application, and in the process I had to add a new field to one of the database tables. My question is: how do I get the application to create one new field in a table that already exists? This is the line that creates the table:

        stmt.text = "CREATE TABLE IF NOT EXISTS tbl_status ("+"status_id INTEGER PRIMARY KEY AUTOINCREMENT,"+" status_status TEXT)";

    And now I'd like to add a status_default field. Thanks!

    Thanks, MPelletier. I've added the code you provided and it does add the field, but now the next time I restart my app I get an error: 'status_default' already exists. So how can I go about adding some sort of IF NOT EXISTS clause to the line you provided?
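
    SQLite has no ADD COLUMN IF NOT EXISTS, so one workable sketch is to inspect the table schema first and only run the ALTER TABLE when the column is missing. This assumes a synchronously opened SQLConnection named conn; the table and column names are taken from the question:

        import flash.data.SQLConnection;
        import flash.data.SQLStatement;
        import flash.data.SQLSchemaResult;
        import flash.data.SQLTableSchema;
        import flash.data.SQLColumnSchema;

        conn.loadSchema();
        var schema:SQLSchemaResult = conn.getSchemaResult();
        var hasColumn:Boolean = false;

        // Look for the column in the existing table definition.
        for each (var table:SQLTableSchema in schema.tables) {
            if (table.name == "tbl_status") {
                for each (var col:SQLColumnSchema in table.columns) {
                    if (col.name == "status_default") {
                        hasColumn = true;
                    }
                }
            }
        }

        // Only add the column when it is not already there.
        if (!hasColumn) {
            var alter:SQLStatement = new SQLStatement();
            alter.sqlConnection = conn;
            alter.text = "ALTER TABLE tbl_status ADD COLUMN status_default TEXT";
            alter.execute();
        }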

    Read the article

  • C# where does the dbml file come from?

    - by 5YrsLaterDBA
    I'm learning C# and LINQ now and have lots of questions about it. Basically I need a step-by-step tutorial. I suppose the dbml file is the configuration file for the database. When I double-click it, VS opens it with a kind of design diagram. Can I create/delete/modify tables here? I can use Add New Item to add the LINQ to SQL Classes and get a dbml file, but what's next? Generate tables in the database? Generate a SQL script? Generate .cs files? When? How?
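
    For context, a sketch of what the designer-generated code is used for, assuming a hypothetical MyDatabaseDataContext with a Customers table: the .dbml file itself is just XML describing the mapping, and the designer generates the DataContext and entity classes in a .designer.cs file behind it, which you then query with LINQ:

        using System;
        using System.Linq;

        class Program
        {
            static void Main()
            {
                // MyDatabaseDataContext and Customers are illustrative names, generated
                // by the LINQ to SQL designer from tables dragged onto the .dbml surface.
                using (var db = new MyDatabaseDataContext())
                {
                    var recent = from c in db.Customers
                                 where c.CreatedOn > DateTime.Today.AddDays(-7)
                                 select c.Name;

                    foreach (var name in recent)
                        Console.WriteLine(name);
                }
            }
        }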

    Read the article

  • Argument exception after trying to use TryGetObjectByKey

    - by Rickjaah
    Hi, I'm trying to retrieve an object from my database using Entity Framework 4. When I use the following code it gives an ArgumentException: "An item with the same key has already been added."

        if (databaseContext.TryGetObjectByKey(entityKey, out result))
        {
            return (result != null && result is TEntityObject) ? result as TEntityObject : null;
        }
        else
        {
            return null;
        }

    When I inspect the ObjectContext I see the entities, but only if I enumerate the specific list of entities manually in the VS2010 debugger does it work. What am I missing? Do I have to do something else before I can get the item from the database? I have lazy loading set to true. I searched Google but could not find any results, and the same goes for the MSDN library.
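
    For reference, a minimal sketch of the usual TryGetObjectByKey pattern in EF4; the container name "MyEntities", the entity set "Customers", the key column "CustomerId" and the Customer type are all illustrative, and a mismatch between the qualified entity-set name in the EntityKey and the actual container is a common source of surprises here:

        using System.Data;
        using System.Data.Objects;

        // databaseContext is an ObjectContext; the names below are assumptions.
        var key = new EntityKey("MyEntities.Customers", "CustomerId", 42);

        object result;
        if (databaseContext.TryGetObjectByKey(key, out result))
        {
            var customer = result as Customer;   // Customer is a hypothetical entity type
            // ... use customer
        }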

    Read the article

  • When constructing a Bitmap with Bitmap.FromHbitmap(), how soon can the original bitmap handle be del

    - by GBegen
    From the documentation of Image.FromHbitmap() at http://msdn.microsoft.com/en-us/library/k061we7x%28VS.80%29.aspx :

        "The FromHbitmap method makes a copy of the GDI bitmap; so you can release the incoming GDI bitmap using the GDI DeleteObject method immediately after creating the new Image."

    This pretty explicitly states that the bitmap handle can be immediately deleted with DeleteObject as soon as the Bitmap instance is created. Looking at the implementation of Image.FromHbitmap() with Reflector, however, shows that it is a pretty thin wrapper around the GDI+ function, GdipCreateBitmapFromHBITMAP(). There is pretty scant documentation on the GDI+ flat functions, but http://msdn.microsoft.com/en-us/library/ms533971%28VS.85%29.aspx says that GdipCreateBitmapFromHBITMAP() corresponds to the Bitmap::Bitmap() constructor that takes an HBITMAP and an HPALETTE as parameters. The documentation for this version of the Bitmap::Bitmap() constructor at http://msdn.microsoft.com/en-us/library/ms536314%28VS.85%29.aspx has this to say:

        "You are responsible for deleting the GDI bitmap and the GDI palette. However, you should not delete the GDI bitmap or the GDI palette until after the GDI+ Bitmap::Bitmap object is deleted or goes out of scope. Do not pass to the GDI+ Bitmap::Bitmap constructor a GDI bitmap or a GDI palette that is currently (or was previously) selected into a device context."

    Furthermore, one can see from the source code for the C++ portion of GDI+ in GdiPlusBitmap.h that the Bitmap::Bitmap() constructor in question is itself a wrapper for the GdipCreateBitmapFromHBITMAP() function from the flat API:

        inline Bitmap::Bitmap(IN HBITMAP hbm, IN HPALETTE hpal)
        {
            GpBitmap *bitmap = NULL;
            lastResult = DllExports::GdipCreateBitmapFromHBITMAP(hbm, hpal, &bitmap);
            SetNativeImage(bitmap);
        }

    What I can't easily see is the implementation of GdipCreateBitmapFromHBITMAP(), which is the core of this functionality, but the two remarks in the documentation seem to be contradictory. The .Net documentation says I can delete the bitmap handle immediately, and the GDI+ documentation says the bitmap handle must be kept until the wrapping object is deleted, but both are based on the same GDI+ function. Furthermore, the GDI+ documentation warns against using a source HBITMAP that is currently or previously selected into a device context. While I can understand why the bitmap should not be selected into a device context currently, I do not understand why there is a warning against using a bitmap that was previously selected into a device context. That would seem to prevent use of GDI+ bitmaps that had been created in memory using standard GDI.

    So, in summary:

    - Does the original bitmap handle need to be kept around until the .Net Bitmap object is disposed?
    - Does the GDI+ function, GdipCreateBitmapFromHBITMAP(), make a copy of the source bitmap or merely hold onto the handle to the original?
    - Why should I not use an HBITMAP that was previously selected into a device context?
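
    A small sketch of the pattern the .NET documentation describes (copy first, then release the handle); DeleteObject is the standard gdi32 P/Invoke, and hBitmap stands in for whatever HBITMAP the caller already owns:

        using System;
        using System.Drawing;
        using System.Runtime.InteropServices;

        static class HBitmapInterop
        {
            [DllImport("gdi32.dll")]
            static extern bool DeleteObject(IntPtr hObject);

            // Wraps an existing HBITMAP into a managed Bitmap and, following the
            // Image.FromHbitmap documentation, releases the original handle right away.
            public static Bitmap CopyFromHandle(IntPtr hBitmap)
            {
                Bitmap managed = Image.FromHbitmap(hBitmap);
                DeleteObject(hBitmap);
                return managed;
            }
        }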

    Read the article

  • Silverlight 4 HttpWebRequest user agent string is null

    - by Dan B
    The problem: I have a page with a Silverlight object that attempts to retrieve XML from the page, but I am struggling with a security exception. I have this code working brilliantly in WPF. When a website hosts a Silverlight application with the same code, the user agent string of the HttpWebRequest object is null (and seemingly cannot be set). In fact there is no header information at all, and this causes a security exception when attempting to make my asynchronous call.

    The question: Why is the user-agent string (and header information) null in my Silverlight 4 application when making an asynchronous call using HttpWebRequest? Thanks in advance!

    Read the article

  • Why do I get this error when I try to push my SQLite3 to Postgresql (via Taps) on Cedar Stack?

    - by rhodee
    I've done quite a bit of research on the Heroku Dev Center and I am now looking to the community for help. Here is my problem: I cannot push my db to the Heroku Cedar stack. I am trying to migrate a sqlite database to postgresql via the Taps gem. When I am ready to deploy I run:

        bundle install --without production
        heroku run db:push

    I get the following result:

        Running db:seed attached to terminal... up, run.17
        sh: db:seed: not found
        heroku run rake db:migrate

    And when I run the migration:

        heroku run rake db:migrate

    I get the following:

        Running rake db:migrate attached to terminal... up, run.18
        rake aborted!
        No Rakefile found (looking for: rakefile, Rakefile, rakefile.rb, Rakefile.rb)
        /usr/local/lib/ruby/1.9.1/rake.rb:2367:in `raw_load_rakefile'
        /usr/local/lib/ruby/1.9.1/rake.rb:2007:in `block in load_rakefile'
        /usr/local/lib/ruby/1.9.1/rake.rb:2058:in `standard_exception_handling'
        /usr/local/lib/ruby/1.9.1/rake.rb:2006:in `load_rakefile'
        /usr/local/lib/ruby/1.9.1/rake.rb:1991:in `run'
        /usr/local/bin/rake:31:in `<main>'

    Every time I push to Heroku (git push heroku master) it fails because my Gemfile is attempting to install the sqlite3 gem, even though it's inside the development and test groups in my Gemfile. My database.yml production environment still points to the sqlite adapter even after I have run the following command successfully:

        heroku config:add BUNDLE_WITHOUT="test development" --app app_name_on_heroku

    I'm out of ideas. Please help. If it's useful I can post the contents of my Gemfile and the output of heroku ps and logs. Cheers

    UPDATE: After following @John's direction I now receive the following terminal message:

        Sending schema
        Schema:        100% |==========================================| Time: 00:00:07
        Sending indexes
        schema_migrat: 100% |==========================================| Time: 00:00:00
        Sending data
        4 tables, 6 records
        schema_migrat:   0% |                                          | ETA:  --:--:--
        Saving session to push_201111070749.dat..
        !!!
Caught Server Exception HTTP CODE: 500 Taps Server Error: LoadError: no such file to load -- sequel/adapters/ And the following warnings: ["/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:249:in require'", "/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:249:inblock in tsk_require'", "/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:72:in block in check_requiring_thread'", "<internal:prelude>:10:insynchronize'", "/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:69:in check_requiring_thread'", "/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:249:intsk_require'", "/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/database/connecting.rb:25:in adapter_class'", "/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/database/connecting.rb:54:inconnect'", "/app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:119:in connect'", "/app/lib/taps/db_session.rb:14:inconn'", "/app/lib/taps/server.rb:91:in block in <class:Server>'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:865:incall'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:865:in block in route'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:521:ininstance_eval'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:521:in route_eval'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:500:inblock (2 levels) in route!'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:497:in catch'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:497:inblock in route!'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:476:in each'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:476:inroute!'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:601:in dispatch!'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:411:inblock in call!'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:566:in instance_eval'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:566:inblock in invoke'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:566:in catch'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:566:ininvoke'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:411:in call!'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:399:incall'", "/app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.1/lib/rack/auth/basic.rb:25:in call'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:979:inblock in call'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:1005:in synchronize'", "/app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:979:incall'", "/home/heroku_rack/lib/static_assets.rb:9:in call'", "/home/heroku_rack/lib/last_access.rb:15:incall'", "/app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.1/lib/rack/urlmap.rb:47:in block in call'", "/app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.1/lib/rack/urlmap.rb:41:ineach'", "/app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.1/lib/rack/urlmap.rb:41:in call'", "/home/heroku_rack/lib/date_header.rb:14:incall'", "/app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.1/lib/rack/builder.rb:77:in call'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:76:inblock in pre_process'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:74:in catch'", 
"/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:74:inpre_process'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:57:in process'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:42:inreceive_data'", "/app/.bundle/gems/ruby/1.9.1/gems/eventmachine-0.12.10/lib/eventmachine.rb:256:in run_machine'", "/app/.bundle/gems/ruby/1.9.1/gems/eventmachine-0.12.10/lib/eventmachine.rb:256:inrun'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/backends/base.rb:57:in start'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/server.rb:156:instart'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/controllers/controller.rb:80:in start'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/runner.rb:177:inrun_command'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/runner.rb:143:in run!'", "/app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/bin/thin:6:in'", "/usr/ruby1.9.2/bin/thin:19:in load'", "/usr/ruby1.9.2/bin/thin:19:in'"]

    Read the article

  • Marshal a list of objects from VB6 to C#

    - by Andrew
    I have a development task which requires passing objects between a VB6 application and a C# class library. The objects are defined in the C# class library and are used as parameters for methods exposed by other classes in the same library. The objects all contain simple string/numeric properties, so marshaling has been relatively painless. We now have a requirement to pass an object which contains a list of other objects. If I were coding this in VB6 I might have a class containing a collection as a member variable; in C# I might have a class with a List member variable. Is it possible to construct a C# class in such a way that the VB6 application could populate this inner list and marshal it successfully? I don't have a lot of experience here, but I would guess I'd have to use an array of Object types.
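
    A rough sketch of the kind of shape that tends to marshal cleanly, along the lines the question already guesses at: the inner list is exposed to COM as an array of objects. The class, property and ProgId names here are illustrative, not from the post, and AutoDual is just one reasonable interface choice:

        using System;
        using System.Runtime.InteropServices;

        [ComVisible(true)]
        [ClassInterface(ClassInterfaceType.AutoDual)]
        [ProgId("MyLibrary.OrderBatch")]
        public class OrderBatch
        {
            // Simple scalar properties marshal as before.
            public string BatchName { get; set; }

            // VB6 can build a Variant array of objects and assign it here.
            public object[] Items { get; set; }
        }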

    Read the article

  • Updating fields of values in a ConcurrentDictionary

    - by rboarman
    I am trying to update entries in a ConcurrentDictionary, something like this:

        class Class1
        {
            public int Counter { get; set; }
        }

        class Test
        {
            private ConcurrentDictionary<int, Class1> dict =
                new ConcurrentDictionary<int, Class1>();

            public void TestIt()
            {
                foreach (var foo in dict)
                {
                    foo.Value.Counter = foo.Value.Counter + 1; // Simplified example
                }
            }
        }

    Essentially I need to iterate over the dictionary and update a field on each Value. I understand from the documentation that I need to avoid using the Value property. Instead I think I need to use TryUpdate, except that I don't want to replace my whole object; I only want to update a field on it. After reading this blog entry on the PFX team blog, perhaps I need to use AddOrUpdate and simply do nothing in the add delegate. Does anyone have any insight as to how to do this?
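
    A minimal sketch of the AddOrUpdate idea mentioned above, under the assumption that mutating the existing Class1 in place is acceptable. Note that AddOrUpdate only makes the dictionary entry update safe; if several threads can touch the same Class1 instance, the Counter increment itself still needs Interlocked or a lock:

        using System.Collections.Concurrent;

        foreach (var key in dict.Keys)
        {
            dict.AddOrUpdate(
                key,
                k => new Class1 { Counter = 1 },                  // add path: not expected here
                (k, existing) => { existing.Counter++; return existing; });
        }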

    Read the article

  • Memory leak on CollectionView.View.Refresh

    - by Dabblernl
    I have defined my binding thus:

        <TreeView ItemsSource="{Binding UsersView.View}" ItemTemplate="{StaticResource MyDataTemplate}" />

    The CollectionViewSource is defined thus:

        private ObservableCollection<UserData> users;
        public CollectionViewSource UsersView { get; set; }

        UsersView = new CollectionViewSource { Source = users };
        UsersView.SortDescriptions.Add(
            new SortDescription("IsLoggedOn", ListSortDirection.Descending));
        UsersView.SortDescriptions.Add(
            new SortDescription("Username", ListSortDirection.Ascending));

    So far, so good; this works as expected: the view shows first the users that are logged on, in alphabetical order, then the ones that are not. However, the IsLoggedIn property of the UserData is updated every few seconds by a BackgroundWorker thread, and then the code calls:

        UsersView.View.Refresh();

    on the UI thread. Again this works as expected: users that log on are moved from the bottom of the view to the top and vice versa. However: every time I call the Refresh method on the view, the application hoards 3.5MB of extra memory, which is only released after application shutdown (or after an OutOfMemoryException...). I did some research and below is a list of fixes that did NOT work:

    - The UserData class implements INotifyPropertyChanged.
    - Changing the underlying users collection does not make any difference at all: any IEnumerable<UserData> as a source for the CollectionViewSource causes the problem.
    - Changing the CollectionViewSource to a List<UserData> (and refreshing the binding), or inheriting from ObservableCollection to get access to the underlying Items collection and sort that in place, does not work either.

    I am out of ideas! Help?

    EDIT: I found it: the resource MyDataTemplate contains a Label that is bound to a UserData object to show one of its properties, the UserData objects being handed down by the TreeView's ItemsSource. The Label has a ContextMenu defined thus:

        <ContextMenu Background="Transparent" Width="325" Opacity=".8" HasDropShadow="True">
            <PrivateMessengerUI:MyUserData IsReadOnly="True">
                <PrivateMessengerUI:MyUserData.DataContext>
                    <Binding Path="."/>
                </PrivateMessengerUI:MyUserData.DataContext>
            </PrivateMessengerUI:MyUserData>
        </ContextMenu>

    The MyUserData object is a UserControl that shows all properties of the UserData object. In this way the user first sees only one piece of data for a user, and on a right click sees all of it. When I remove the MyUserData UserControl from the DataTemplate the memory leak disappears! How can I still implement the behaviour as specified above?

    Read the article

  • MVC null model problem

    - by femi
    Hello, I have created two Create actions: one to return the Create view and the other to process the Create view using HttpPost. When I call the Create view it gets rendered correctly, dropdowns and all. The problem is that when I fill out the Create form and click on the submit button, I get an error: "Object reference not set to an instance of an object." My first thought is that I am passing a null model to the HttpPost Create action. How can I check whether I am passing a null model to the HttpPost Create action? Thanks
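
    A small sketch of one way to make the null visible, assuming a hypothetical Widget model and controller (none of these names come from the question): guard the POST action explicitly, and the debugger or ModelState will then tell you whether the binder ever produced a model at all:

        using System.Web.Mvc;

        public class Widget
        {
            public string Name { get; set; }
        }

        public class WidgetController : Controller
        {
            // GET: renders the empty create form
            public ActionResult Create()
            {
                return View(new Widget());
            }

            // POST: processes the submitted form
            [HttpPost]
            public ActionResult Create(Widget model)
            {
                if (model == null)
                {
                    // Model binding produced nothing - often mismatched form field
                    // names or inputs sitting outside the form tag.
                    ModelState.AddModelError("", "No form data was bound to the model.");
                    return View(new Widget());
                }

                // ... save the widget, then redirect
                return RedirectToAction("Index");
            }
        }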

    Read the article

  • how to set jqgrid cell color at runtime

    - by anil
    Hi, I am populating a jqGrid from a database and one of its columns is a color column with values like red, blue, etc. Can I set the cell color of this column based on the value coming from the database at run time? How should I set the formatter in this case? I tried like this but it does not work:

        var colorFormatter = function(cellvalue, options, rowObject) {
            var colorElementString = '';
            return colorElementString;
        };

        colModel: [
            { name: 'GroupName', index: 'GroupName', width: 200, align: 'left' },
            { name: 'Description', index: 'Description', width: 300, align: 'left' },
            { name: 'Color', index: 'Color', width: 60, align: 'left', formatter: colorFormatter }
        ],
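
    For what it's worth, a sketch of a custom formatter that wraps the cell content in a coloured element (the markup string in the original post appears to have been stripped by the page); this assumes the Color column carries a CSS colour value such as "red" or "#336699":

        // Custom jqGrid formatter: paint the cell background with the value itself.
        var colorFormatter = function (cellvalue, options, rowObject) {
            return '<span style="display:block;width:100%;background-color:' + cellvalue + ';">'
                 + cellvalue
                 + '</span>';
        };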

    Read the article

  • Django data migration when changing a field to ManyToMany

    - by Ken H
    I have a Django application in which I want to change a field from a ForeignKey to a ManyToManyField. I want to preserve my old data. What is the simplest/best process to follow for this? If it matters, I use sqlite3 as my database back-end. If my summary of the problem isn't clear, here is an example. Say I have two models:

        class Author(models.Model):
            author = models.CharField(max_length=100)

        class Book(models.Model):
            author = models.ForeignKey(Author)
            title = models.CharField(max_length=100)

    Say I have a lot of data in my database. Now, I want to change the Book model as follows:

        class Book(models.Model):
            author = models.ManyToManyField(Author)
            title = models.CharField(max_length=100)

    I don't want to "lose" all my prior data. What is the best/simplest way to accomplish this? Ken
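
    One common sketch of a data-preserving route, assuming it is acceptable to keep the old column around temporarily: add the ManyToManyField under a new name, copy the existing relations across (for example from the Django shell or a one-off script), and only drop the old ForeignKey in a later schema change. The field name "authors" is an assumption:

        class Book(models.Model):
            author = models.ForeignKey(Author)          # old field, kept during the migration
            authors = models.ManyToManyField(Author)    # new field
            title = models.CharField(max_length=100)

        # One-off copy of the existing data, e.g. run from "manage.py shell":
        for book in Book.objects.all():
            if book.author_id is not None:
                book.authors.add(book.author)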

    Read the article

  • DateTimePicker not updating dataset

    - by Dan
    I'm binding a DateTimePicker control to my dataset (which is linked to a database). However, unless the user changes the date on that control, the dataset seems to contain null for that entry (even though the Value property of the control isn't null). I've done a bit of googling, and there's a lot of talk about people having trouble with the DateTimePicker not supporting null values. However, I DON'T want it to support a NULL value; the column in my database table is set to "NOT NULL". It's as if the dataset isn't updating itself from the DateTimePicker control unless the user changes the date. I've tried explicitly setting the date for the control in code (using DateTimePicker.Value = DateTime.Now). This still doesn't update the dataset side. Thank you for any help, Dan.
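
    A hedged sketch of two things that often resolve this: push the binding on every value change rather than only on validation, and write the control's initial value into the data source explicitly. The dataset, table and column names here are placeholders:

        // Bind the Value property so the data source is updated as soon as it changes,
        // not only when the control is validated.
        Binding binding = dateTimePicker1.DataBindings.Add(
            "Value", myDataSet, "Orders.OrderDate",
            true, DataSourceUpdateMode.OnPropertyChanged);

        // Push the current (default) value into the dataset even if the user
        // never touches the control.
        binding.WriteValue();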

    Read the article

  • Webbrowser control: auto fill, only works one time, why?

    - by Khou
    The following code loads a page and auto-fills in the values:

        private void button1_Click(object sender, EventArgs e)
        {
            // Load page and autofill
            webBrowser1.Navigate("http://exampledomain.com");
            webBrowser1.DocumentCompleted +=
                new WebBrowserDocumentCompletedEventHandler(autoFillDetails);
            // etc...etc..
        }

        private void autoFillDetails(object sender, WebBrowserDocumentCompletedEventArgs e)
        {
            // do auto fill values
            ((WebBrowser)sender).Document.GetElementById("MY_NAME").SetAttribute("value", "theMynamevalue");
            // etc...etc...
        }

    The autofill only works one time! After the form has been submitted and you navigate back to the previous form page (even when you click the button again), it no longer auto-fills the form values. Note: the autoFillDetails code is executed a second time, a third time, etc., but it still does not auto-fill the values. Why does it only work one time? What am I doing wrong?

    Read the article

  • MySQL query returns different set of results on two identical databases

    - by 1nsane
    I exported a live MySQL database (running mysql 5.0.45) to a local copy (running mysql 5.1.33) with no errors upon import. There is a view in the database, that when executed locally, returns a different set of data than when executed remotely. It's returning 32 results instead of 63. When I execute the raw sql, the same problem occurs. I've inspected the data in all tables being joined, and the counts are the same. The query is simple and has no where conditions - but about 10 joins. Aside from the differences in mysql versions... I can't find any reason that this query would return different results between databases... since they are effectively exact copies. Has anyone experienced a problem like this before?

    Read the article

  • JAXB XmlID and XmlIDREF annotations (Schema to Java)

    - by kipz
    I am exposing a web service using CXF. I am using the @XmlID and @XmlIDREF JAXB annotations to maintain the referential integrity of my object graph during marshalling/unmarshalling. The WSDL rightly contains elements with xs:ID and xs:IDREF attributes to represent this. On the server side, everything works really nicely: instances of types annotated with @XmlIDREF are the same instances (as in ==) as those annotated with @XmlID. However, when I generate a client with WSDLToJava, the references (those annotated with @XmlIDREF) are of type java.lang.Object. Is there any way that I can customise the JAXB bindings so that the types of the references are either java.lang.String (to match the ID of the referenced type) or the same as the referenced type itself?

    Read the article

  • SphinxSearch or a spider - which one to choose?

    - by r2b2
    Hello, here is my problem: we own SiteA and SiteB, and they share the same server and database, where we have full control. SiteC, SiteD and SiteE are some of the sites we own as well, but they reside on different web hosts. The goal is to create unified search functionality for all of the sites mentioned above; that is, if somebody searches for a term on SiteA, the search results will automatically include results from SiteB, SiteC, SiteD and SiteE too. The search results should be shown under the website they were found in. All these websites' content is stored in their own databases. If I use SphinxSearch to index the above sites, I would then require the sites that we don't have complete control over to set up a web service where I can download a database dump or CSV file for indexing. I'm not quite sure how a spider would come into play here, so I need your opinion. Sphinx or a spider? Thanks!

    Read the article

  • RuntimeBinderException with dynamic in C# 4.0

    - by Terence Lewis
    I have an interface:

        public abstract class Authorizer<T> where T : RequiresAuthorization
        {
            public AuthorizationStatus Authorize(T record)
            {
                // Perform authorization specific stuff
                // and then hand off to an abstract method to handle T-specific stuff
                // that should happen when authorization is successful
            }
        }

    Then I have a bunch of different classes which all implement RequiresAuthorization and, correspondingly, an Authorizer<T> for each of them (each business object in my domain requires different logic to execute once the record has been authorized). I'm also using a UnityContainer, in which I register various Authorizer<T>'s. I then have some code as follows to find the right record in the database and authorize it:

        void Authorize(RequiresAuthorization item)
        {
            var dbItem = ChildContainer.Resolve<IAuthorizationRepository>()
                .RetrieveRequiresAuthorizationById(item.Id);

            var authorizerType = Type.GetType(String.Format(
                "Foo.Authorizer`1[[{0}]], Foo",
                dbItem.GetType().AssemblyQualifiedName));

            dynamic authorizer = ChildContainer.Resolve(authorizerType) as dynamic;
            authorizer.Authorize(dbItem);
        }

    Basically, I'm using the Id on the object to retrieve it from the database; in the background NHibernate takes care of figuring out what type of RequiresAuthorization it is. I then want to find the right Authorizer for it (I don't know at compile time which implementation of Authorizer<T> I need, so there's a little bit of reflection to get the fully qualified type). To accomplish this, I use the non-generic overload of UnityContainer's Resolve method to look up the correct authorizer from configuration. Finally, I want to call Authorize on the authorizer, passing through the object I've gotten back from NHibernate.

    Now, for the problem: in Beta 2 of VS2010 the above code works perfectly. On the RC and RTM, as soon as I make the Authorize() call, I get a RuntimeBinderException saying "The best overloaded method match for 'Foo.Authorizer<Bar>.Authorize(Bar)' has some invalid arguments". When I inspect the authorizer in the debugger, it's the correct type. When I call GetType().GetMethods() on it, I can see the Authorize method which takes a Bar. If I call GetType() on dbItem, it is a Bar. Because this worked in Beta 2 and not in the RC, I assumed it was a regression (it seems like it should work) and I delayed sorting it out until after I'd had a chance to test it on the RTM version of C# 4.0. Now I've done that and the problem still persists. Does anybody have any suggestions to make this work? Thanks, Terence

    Read the article

  • JQuery Validation Plugin: Use Custom Ajax Method

    - by namtax
    Hi, I'm looking for some assistance with the jQuery form validation plugin, if possible. I am validating the email field of my form on blur by making an Ajax call to my database, which checks whether the text in the email field is already in the database:

        // Check email validity on blur
        $('#sEmail').blur(function() {
            // Grab email from form
            var itemValue = $('#sEmail').val();

            // Serialise data for ajax processing
            var emailData = { sEmail: itemValue };

            // Do Ajax call
            $.getJSON('http://localhost:8501/ems/trunk/www/cfcs/admin_user_service.cfc?method=getAdminUserEmail&returnFormat=json&queryformat=column',
                emailData,
                function(data) {
                    if (data != false) {
                        var errorMessage = 'This email address has already been registered';
                    } else {
                        var errorMessage = 'Good';
                    }
                });
        });

    What I would like to do is incorporate this call into the rules of my jQuery validation plugin, e.g.:

        $("#setAdminUser").validate({
            rules: {
                sEmail: {
                    required: function() {
                        // Replicate my on-blur ajax email call here
                    }
                }
            },
            messages: {
                sEmail: {
                    required: "this email already exists"
                }
            }
        });

    Is there any way of achieving this? Many thanks
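
    One sketch worth considering: the validation plugin ships with a "remote" rule that does exactly this kind of server-side check. The server response is expected to be true when the value is acceptable, so the ColdFusion component would need a method that returns true when the email is still free; the method name "isEmailAvailable" below is an assumption, not something from the original service:

        $("#setAdminUser").validate({
            rules: {
                sEmail: {
                    required: true,
                    email: true,
                    remote: {
                        url: "http://localhost:8501/ems/trunk/www/cfcs/admin_user_service.cfc",
                        type: "get",
                        data: {
                            method: "isEmailAvailable",   // hypothetical CFC method
                            returnFormat: "json"
                        }
                    }
                }
            },
            messages: {
                sEmail: {
                    remote: "This email address has already been registered"
                }
            }
        });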

    Read the article

  • VBScript & Access MDB - 800A0E7A - "Provider cannot be found. It may not be properly installed"

    - by Perma
    Hey gang, I'm having a problem with a VBScript connecting to an Access MDB database. My platform is Vista 64-bit, but the majority of resources out there are for ASP/IIS7. Quite simply, I can't get it to connect. I'm getting the following error:

        800A0E7A - "Provider cannot be found. It may not be properly installed"

    My code is:

        Set conn = CreateObject("ADODB.Connection")
        strConnect = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\database.MDB"
        conn.Open strConnect

    So far I have run %WINDIR%\System32\odbcad32.exe to try to configure the driver in 32-bit mode, but it hasn't done the trick. Any suggestions would be greatly appreciated.
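
    A hedged pointer: the Jet 4.0 OLE DB provider only exists as a 32-bit component, so on 64-bit Vista the script has to run in the 32-bit script host for the provider to be found. A sketch of what that looks like (connect.vbs is a placeholder file name):

        ' Run the script under the 32-bit host on 64-bit Windows, e.g. from a command prompt:
        '   %WINDIR%\SysWOW64\cscript.exe connect.vbs
        ' The connection code itself stays the same:
        Set conn = CreateObject("ADODB.Connection")
        conn.Open "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\database.MDB"
        WScript.Echo "Connection state: " & conn.State
        conn.Close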

    Read the article

  • SHA or MD5 algorithm I need to encrypt and decrypt in Flex

    - by praveen
    Hi, I am developing my application in Flex and JSP. When I pass values through the HTTPService POST method with a request object, those values can be traced and modified by the testing team, so I am planning to encrypt the values in Flex and decrypt them in JSP. Are there any algorithms like SHA or MD5, or more secure algorithms? Please send any code or related links; it would be very useful to me. I am using something like:

        httpService = new HTTPService;
        httpService.request = new Object;
        httpService.request.task = "doInvite";
        httpService.request.email = emailInput.text;
        httpService.request.firstName = firstNameInput.text;
        httpService.request.lastName = lastNameInput.text;
        httpService.send();

    So is there any other way to make this more secure? Please help me with this. Thanks in advance.
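
    For what it's worth, SHA and MD5 are one-way hashes rather than reversible encryption, so a common sketch is to send a checksum alongside the fields (or simply call the service over HTTPS) and have the JSP side recompute and compare it. This assumes the as3corelib library is on the build path; the field choice and the shared-secret idea are illustrative:

        import com.adobe.crypto.SHA1;

        // Hash the submitted fields (plus a server-known secret) so tampering in
        // transit can be detected on the JSP side by recomputing the same hash.
        var secret:String = "shared-secret-known-to-the-server";   // placeholder value
        var payload:String = emailInput.text + "|" + firstNameInput.text + "|" + lastNameInput.text;

        httpService.request.checksum = SHA1.hash(payload + secret);
        httpService.send();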

    Read the article

  • JPA Entity (in multiple persistence-unit) in OSGi (Spring DM) Environnement is confusing me.

    - by Vincent Demeester
    Hi, I'm a bit confused about some strange behaviour of my JPA-related objects. I have three bundles:

    - The User bundle contains some user-related objects, but mainly the User object.
    - The Energy bundle contains some energy-related objects, and particularly a ConsumptionTerminal which contains a List of User.
    - The Index bundle contains an Index object that has no dependency at all.

    My OSGi environment is the following: a DataSource bundle that provides 2 services, dataSource and jpaVendorAdapter, plus the three bundles above, which consume dataSource and jpaVendorAdapter. Their module-context.xml files look like: And they all have a persistence.xml file:

    User:

        <?xml version="1.0" encoding="UTF-8"?>
        <persistence>
            <persistence-unit name="securityPU" transaction-type="JTA">
                <jta-data-source>java:/securityDataSourceService</jta-data-source>
                <class>net.nextep.amundsen.security.domain.User</class>
                <!-- [...] -->
                <exclude-unlisted-classes>true</exclude-unlisted-classes>
                <properties>
                    <property name="eclipselink.logging.level" value="INFO" />
                    <property name="eclipselink.ddl-generation" value="create-tables" />
                    <property name="eclipselink.ddl-generation.output-mode" value="database" />
                    <property name="eclipselink.orm.throw.exceptions" value="true" />
                </properties>
            </persistence-unit>
        </persistence>

    Energy:

        <?xml version="1.0" encoding="UTF-8"?>
        <persistence>
            <persistence-unit name="energyPU" transaction-type="JTA">
                <jta-data-source>java:/securityDataSourceService</jta-data-source>
                <class>net.nextep.amundsen.security.domain.User</class>
                <class>net.nextep.amundsen.energy.domain.User</class>
                <!-- [...] -->
                <exclude-unlisted-classes>true</exclude-unlisted-classes>
                <properties>
                    <property name="eclipselink.logging.level" value="INFO" />
                    <property name="eclipselink.ddl-generation" value="create-tables" />
                    <property name="eclipselink.ddl-generation.output-mode" value="database" />
                    <property name="eclipselink.orm.throw.exceptions" value="true" />
                </properties>
            </persistence-unit>
        </persistence>

    Index: this one has the most simple persistence.xml, with just the Index class (no shared classes).

    I'm using named @PersistenceUnit annotations like @PersistenceUnit(name = 'securityPU') (for the User bundle). And finally, I'm using EclipseLink as the JPA provider and Spring DM (plus Spring DM Server in the development process).

    The problem is the following:

    - When the User bundle is deployed, I'm able to persist User objects.
    - When the User bundle and the Energy bundle are both deployed, I'm not able to persist User objects (nor the Energy object). But I don't get any exception at all!
    - There is no problem at all with the Index bundle.
    - The bug is dataSource independent (I tried with PostgreSQL and MySQL so far).

    My first conclusion was that the <class>net.nextep.amundsen.security.domain.User</class> entry in both persistence units was causing the trouble. I tried without it (hiding the User-dependent objects in the Energy bundle), but it failed too.

    I'm a bit confused about this bug, and I'm also not quite sure about the transaction management in this context. I wasn't the one who designed this architecture (but I told my intern it was OK without testing it... shame on me), but if I could understand this bug and maybe fix it without rewriting the bundle (and breaking my intern's work), I would appreciate it. Am I doing something wrong? (It's obvious, but what?) Did I miss something while reading the documentation?
    By the way, I'm also looking for some best practices or advice when it comes to JPA, EclipseLink (or whatever JPA provider) and Spring DM (and OSGi in general). I found interesting slides from Mike Keith about this topic (by browsing Stackoverflow).

    Read the article

  • Design for Vacation Tracking System

    - by Aaronaught
    I have been tasked with developing a system for tracking our company's paid time-off (vacation, sick days, etc.) At the moment we are using an Excel spreadsheet on a shared network drive, and it works pretty well, but we are concerned that we won't be able to "trust" employees forever and sometimes we run into locking issues when two people try to open the spreadsheet at once. So we are trying to build something a little more robust. I would like some input on this design in terms of maintainability, scalability, extensibility, etc. It's a pretty simple workflow we need to represent right now: I started with a basic MS Access schema like this: Employees (EmpID int, EmpName varchar(50), AllowedDays int) Vacations (VacationID int, EmpID int, BeginDate datetime, EndDate datetime) But we don't want to spend a lot of time building a schema and database like this and have to change it later, so I think I am going to go with something that will be easier to expand through configuration. Right now the vacation table has this schema: Vacations (VacationID int, PropName varchar(50), PropValue varchar(50)) And the table will be populated with data like this: VacationID | PropName | PropValue -----------+--------------+------------------ 1 | EmpID | 4 1 | EmpName | James Jones 1 | Reason | Vacation 1 | BeginDate | 2/24/2010 1 | EndDate | 2/30/2010 1 | Destination | Spectate Swamp 2 | ... | ... I think this is a pretty good, extensible design, we can easily add new properties to the vacation like the destination or maybe approval status, etc. I wasn't too sure how to go about managing the database of valid properties, I thought of putting them in a separate PropNames table but it gets complicated to manage all the different data types and people say that you shouldn't put CLR type names into a SQL database, so I decided to use XML instead, here is the schema: <VacationProperties> <PropertyNames>EmpID,EmpName,Reason,BeginDate,EndDate,Destination</PropertyNames> <PropertyTypes>System.Int32,System.String,System.String,System.DateTime,System.DateTime,System.String</PropertyTypes> <PropertiesRequired>true,true,false,true,true,false</PropertiesRequired> </VacationProperties> I might need more fields than that, I'm not completely sure. 
I'm parsing the XML like this (would like some feedback on the parsing code): string xml = File.ReadAllText("properties.xml"); Match m = Regex.Match(xml, "<(PropertyNames)>(.*?)</PropertyNames>"; string[] pn = m.Value.Split(','); // do the same for PropertyTypes, PropertiesRequired Then I use the following code to persist configuration changes to the database: string sql = "DROP TABLE VacationProperties"; sql = sql + " CREATE TABLE VacationProperties "; sql = sql + "(PropertyName varchar(100), PropertyType varchar(100) "; sql = sql + "IsRequired varchar(100))"; for (int i = 0; i < pn.Length; i++) { sql = sql + " INSERT VacationProperties VALUES (" + pn[i] + "," + pt[i] + "," + pv[i] + ")"; } // GlobalConnection is a singleton new SqlCommand(sql, GlobalConnection.Instance).ExecuteReader(); So far so good, but after a few days of this I then realized that a lot of this was just a more specific kind of a generic workflow which could be further abstracted, and instead of writing all of this boilerplate plumbing code I could just come up with a workflow and plug it into a workflow engine like Windows Workflow Foundation and have the users configure it: In order to support routing these configurations throw the workflow system, it seemed natural to implement generic XML Web Services for this instead of just using an XML file as above. I've used this code to implement the Web Services: public class VacationConfigurationService : WebService { [WebMethod] public void UpdateConfiguration(string xml) { // Above code goes here } } Which was pretty easy, although I'm still working on a way to validate that XML against some kind of schema as there's no error-checking yet. I also created a few different services for other operations like VacationSubmissionService, VacationReportService, VacationDataService, VacationAuthenticationService, etc. The whole Service Oriented Architecture looks like this: And because the workflow itself might change, I have been working on a way to integrate the WF workflow system with MS Visio, which everybody at the office already knows how to use so they could make changes pretty easily. We have a diagram that looks like the following (it's kind of hard to read but the main items are Activities, Authenticators, Validators, Transformers, Processors, and Data Connections, they're all analogous to the services in the SOA diagram above). The requirements for this system are: (Note - I don't control these, they were given to me by management) Main workflow must interface with Excel spreadsheet, probably through VBA macros (to ease the transition to the new system) Alerts should integrate with MS Outlook, Lotus Notes, and SMS (text messages). We also want to interface it with the company Voice Mail system but that is not a "hard" requirement. Performance requirements: Must handle 250,000 Transactions Per Second Should be able to handle up to 20,000 employees (right now we have 3) 99.99% uptime ("four nines") expected Must be secure against outside hacking, but users cannot be required to enter a username/password. Platforms: Must support Windows XP/Vista/7, Linux, iPhone, Blackberry, DOS 2.0, VAX, IRIX, PDP-11, Apple IIc. Time to complete: 6 to 8 weeks. My questions are: Is this a good design for the system so far? Am I using all of the recommended best practices for these technologies? How do I integrate the Visio diagram above with the Windows Workflow Foundation to call the ConfigurationService and persist workflow changes? Am I missing any important components? 
Will this be extensible enough to support any scenario via end-user configuration? Will the system scale to the above performance requirements? Will we need any expensive hardware to run it? Are there any "gotchas" I should know about with respect to cross-platform compatibility? For example would it be difficult to convert this to an iPhone app? How long would you expect this to take? (We've dedicated 1 week for testing so I'm thinking maybe 5 weeks?)

    Read the article

  • Auto increment column in JDO, GAE

    - by Viktor
    Hi, I have a data class with some fields. One is a URL that I consider the PK; if I add a new item (do a new sync) and save it, it should overwrite the item in the database if it has the same URL. But I also need a "normal" Long id that is incremented for every object in the database, and for this one I always get null unless I tag it as the PK. How can I get this incrementation without having the column as my PK?

        @Persistent(valueStrategy=IdGeneratorStrategy.IDENTITY)
        private Long _id;

        @Persistent
        private String _title;

        @PrimaryKey
        @Persistent
        private String _url;

    /Viktor

    Read the article
