Search Results

Search found 10042 results on 402 pages for 'orient db'.

Page 357 of 402

  • Distributed Cache with Serialized File as DataStore in Oracle Coherence

    - by user226295
    Weird as it may sound, I am investigating Oracle Coherence as a substitute for a distributed cache. My primary problem is that we don't have a distributed cache as such in our app right now. That's my major concern, and that's what I want to implement. So, let's say I take another machine and start a new (third) reading process: it will be able to connect to the cache, listen to it, and hold a full copy of the cache, triplicated (as of now it's duplicated). That's a waste from a common-sense standpoint too. The size of the cache is 2 GB, and not going distributed is limiting us. That brings me to Coherence. But we don't have a database as a persistent store either; our archival processes are our persistent store (90 days' worth of data). Now multiply that by somewhere around 2 GB * 90 (that's the bare minimum we want to keep). That was my preliminary/intermediate analysis of Coherence as a solution, and a (supposedly) brilliant thought crossed my mind: why not have this distributed cache double as the persistent storage? Does Oracle Coherence support that? I would get rid of the archiving infrastructure too (I hate daemon archiving processes). For some strange reasons, I don't want to go to the DB to replace those flat files. What do you say: can Coherence be my savior? Any other stable alternatives? (Coherence is imposed on me by the big guys, FYI.)

    Read the article

  • Using entity framework to connect to multiple similar tables in .net MVC.

    - by Dite
    A relative newcomer to .NET MVC2 and the Entity Framework, I am working on a project which requires a single web application (C#, .NET 4) to connect to multiple different databases depending on the route of access (i.e. subdomain). No problem with this in principle, and all the logic is written to transform the subdomain into an entity connection and pass it through to the entity model. The problem comes from the fact that the different databases, while largely similar in structure, each contain 3 or 4 unique tables bespoke to that instance. To my mind there are two ways to solve this issue, neither of which I am sure will be possible. 1) Use a separate entity model for each database. Attempts down this route have thrown up conflicts where table/SP names are the same across different DBs, or implicit conversion errors when I try to put the different models in different namespaces. Or 2) Override the classes which refer to the changeable database objects based on the value of a base controller property. I have found nothing to suggest I can even do this. My question is whether either of these routes can ever work in principle, or whether I should just give up on the EF and connect to the databases directly using ADO. Perhaps there is another way to solve this problem I haven't thought of? Thanks for any help...

    Read the article

  • Performing an SVD on tweets: memory problem

    - by plotti
    I have generated a huge CSV file as the output from my POS tagging and stemming. It looks like this (word counts per person):

                     word1  word2  word3  ...  word14400
        person1          1      2      0  ...
        person2          0      0      1  ...
        ...
        person650

    This gives me a characteristic vector for each person. I want to run an SVD on this beast, but it seems the matrix is too big to be held in memory to perform the operation. My question is: should I reduce the column count by removing words whose column sum is, for example, 1, meaning they have been used only once? Do I bias the data too much with this attempt? I also tried the RapidMiner approach of loading the CSV into a DB and then reading it in sequentially in batches for processing, as RapidMiner proposes, but MySQL can't store that many columns in a table. If I transpose the data and then re-transpose it on import, it also takes ages. So, in general, I am asking for advice on how to perform an SVD on such a corpus.
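
    One way around the memory limit is a sparse, truncated SVD. The sketch below (Python with SciPy; the file name and the choice of k are assumptions, not from the question) loads the counts into a sparse matrix and computes only the top k singular vectors, so a dense 650 x 14400 matrix never has to exist in memory:

        import numpy as np
        from scipy.sparse import lil_matrix
        from scipy.sparse.linalg import svds

        # Assumed shape: 650 people x 14400 words; adjust to the real corpus.
        counts = lil_matrix((650, 14400), dtype=np.float64)

        with open("counts.csv") as f:        # hypothetical file name
            f.readline()                     # skip the word1..wordN header row
            for i, line in enumerate(f):
                fields = line.rstrip("\n").split(",")
                for j, value in enumerate(fields[1:]):   # fields[0] is "personN"
                    v = float(value or 0)
                    if v:                    # keep only non-zero counts
                        counts[i, j] = v

        # Truncated SVD: only the k largest singular triplets are computed.
        u, s, vt = svds(counts.tocsr(), k=50)
        print(s[::-1])                       # singular values, largest first

    Dropping columns whose sum is 1 is a common extra pruning step in text processing; whether it biases the result is a judgement call for the data at hand.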

    Read the article

  • Firing through HTTP a Perl script for sending signals to daemons

    - by Eric Fortis
    Hello guys, I'm using apache2 on Ubuntu. I have a Perl script which basically reads the file names in a directory, then rewrites a text file, then sends a signal to a daemon. How can this be done, as securely as possible, through a web page? Actually I can run the code below, but not if I remove the comments. I'm looking for advice considering: Should I use HTTP requests? How about Apache file permissions on the directory shown in the code? Is htaccess enough to enable user/pass access to the CGI? Should I use a database instead of writing to a file, and run a cron job querying the DB with permission granted to write and send the signal? Granting as few permissions as possible to the webserver. Should I set up a VPN?

        #!/usr/bin/perl -wT
        use strict;
        use CGI;
        #@fileList = </home/user/*>;    # read a directory listing
        my $query = CGI->new();
        print $query->header( "text/html" ),
              $query->p( "FirstFileNameInArray" ),
              #$query->p( $fileList[0] ),    # output the first file in the directory
              $query->end_html;

    Read the article

  • Database connection OK, but the result does not appear

    - by klox
    Hi all. I'm now connected to the database, but the result does not appear in "Tuner range is " + res. This is my code:

        var str = data[0];
        var matches = str.match(/[EE|EJU].*D/i);
        $.ajax({
            type: "post",
            url: "process1.php",
            data: "tversion=" + matches + "&action=tunermatches",
            cache: false,
            async: false,
            success: function(res) {
                $('#value').replacewith("<div id='value'><h6>Tuner range is" + res + ".</h6></div>");
            }
        });

    and this is my process file:

        //connect to database
        $dbc = mysql_connect(_SRV, _ACCID, _PWD) or die(_ERROR15.": ".mysql_error());
        $db  = mysql_select_db("qdbase", $dbc) or die(_ERROR17.": ".mysql_error());
        switch (postVar('action')) {
            case 'tunermatches':
                tunermatches(postVar('tversion'));
                break;
        }
        function tunermatches($tversion) {
            $Tuner = mysql_real_escape_string($tversion);
            $sql = "SELECT remark FROM settingdata WHERE itemname='Tuner_range' AND itemdata='".$Tunermatches."'";
            $res = mysql_query($sql) or die(_ERROR26.":".mysql_error());
            $dat = mysql_fetch_array($res, MYSQL_NUM);
            if ($dat[0] > 0) {
                echo $dat[0];
            }
            mysql_close($dbc);
        }

    Read the article

  • What is the best / proper idiom in Django for modifying a field during a .save() where you need to know the old value?

    - by MDBGuy
    Hi, say I've got:

        class LogModel(models.Model):
            message = models.CharField(max_length=512)

        class Assignment(models.Model):
            someperson = models.ForeignKey(SomeOtherModel)

            def save(self, *args, **kwargs):
                super(Assignment, self).save()
                old_person = # ?????
                LogModel(message="%s is no longer assigned to %s" % (old_person, self)).save()
                LogModel(message="%s is now assigned to %s" % (self.someperson, self)).save()

    My goal is to save to LogModel some messages about who the Assignment was assigned to. Notice that I need to know the old, pre-save value of this field. I have seen code that suggests, before super().save(), retrieving the instance from the database via its primary key and grabbing the old value from there. This could work, but is a bit messy. In addition, I plan to eventually split this code out of the .save() method via signals, namely pre_save and post_save. Trying to use the above logic (retrieve from the DB in pre_save, make the log entry in post_save) seemingly fails here, as pre_save and post_save are two separate methods. Perhaps in pre_save I can retrieve the old value and stick it on the model as an attribute? I was wondering if there was a common idiom for this. Thanks.
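
    The "stash it on the instance in pre_save" idea from the question is a workable idiom. A hedged sketch (signal based; the attribute name _old_someperson is made up for illustration):

        from django.db.models import signals

        def remember_old_person(sender, instance, **kwargs):
            # Before the row is written, look up the current DB value and stash it.
            if instance.pk:
                instance._old_someperson = sender.objects.get(pk=instance.pk).someperson
            else:
                instance._old_someperson = None

        def log_assignment(sender, instance, created, **kwargs):
            old = getattr(instance, '_old_someperson', None)
            if old is not None and old != instance.someperson:
                LogModel(message="%s is no longer assigned to %s" % (old, instance)).save()
            LogModel(message="%s is now assigned to %s" % (instance.someperson, instance)).save()

        signals.pre_save.connect(remember_old_person, sender=Assignment)
        signals.post_save.connect(log_assignment, sender=Assignment)

    The extra SELECT per save is the usual cost of this idiom; it only goes away if the calling code carries the old value around explicitly.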

    Read the article

  • How to make write operation idempotent?

    - by Morgan Cheng
    I'm reading the article about the recently released Gizzard sharding framework from Twitter (http://engineering.twitter.com/2010/04/introducing-gizzard-framework-for.html). It mentions that all write operations must be idempotent to ensure high reliability. According to Wikipedia, "Idempotent operations are operations that can be applied multiple times without changing the result." But, IMHO, in Gizzard's case, idempotent write operations should also be operations whose sequence doesn't matter. Now, my question is: how do you make a write operation idempotent? The only thing I can imagine is to have a version number attached to each write. For example, in a blog system, each blog has a $blog_id and $content. At the application level, we always write blog content like this: write($blog_id, $content, $version). The $version is determined to be unique at the application level. So, if the application first tries to set one blog to "Hello world" and then wants it to be "Goodbye", the write is idempotent. We have these two write operations: write($blog_id, "Hello world", 1); write($blog_id, "Goodbye", 2); These two operations are supposed to change two different records in the DB. So, no matter how many times and in what sequence these two operations are executed, the results are the same. This is just my understanding. Please correct me if I'm wrong.
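
    A toy sketch of that versioned-write idea (plain Python; names made up): because every write lands on its own (id, version) key and reads take the highest version, replaying a write or reordering writes cannot change the final state.

        store = {}

        def write(blog_id, content, version):
            # Re-applying the same write is a no-op in effect: same key, same value.
            store[(blog_id, version)] = content

        def read(blog_id):
            versions = [v for (b, v) in store if b == blog_id]
            return store[(blog_id, max(versions))] if versions else None

        write(1, "Hello world", 1)
        write(1, "Goodbye", 2)
        write(1, "Hello world", 1)   # replayed out of order: state is unchanged
        print(read(1))               # prints "Goodbye"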

    Read the article

  • EditText items in a scrolling list lose their changes when scrolled off the screen

    - by ianww
    I have a long scrolling list of EditText items created by a SimpleCursorAdapter and prepopulated with values from an SQLite database. I build it like this:

        cursor = db.rawQuery("SELECT _id, criterion, localweight, globalweight FROM "
                + dbTableName + " ORDER BY criterion", null);
        startManagingCursor(cursor);
        mAdapter = new SimpleCursorAdapter(this, R.layout.weight_edit_items, cursor,
                new String[]{"criterion", "localweight", "globalweight"},
                new int[]{R.id.criterion_edit, R.id.localweight_edit, R.id.globalweight_edit});
        this.setListAdapter(mAdapter);

    The scrolling list is several emulator screens long. The items display OK; scrolling through them shows that each has the correct value from the database. I can make an edit to any of the EditTexts and the new text is accepted and displayed in the box. But if I then scroll the list far enough to take the edited item off the screen, when I scroll back to look at it again its value has returned to what it was before I made the changes, i.e. my edits have been lost. In trying to sort this out, I've used getText to look at what's in the EditText after I've done my edits (and before a scroll), and getText returns the original text, even though the EditText is displaying my new text. It seems that the EditText has only accepted my edits superficially and they haven't been bound to the EditText, meaning they get dropped when scrolled off the screen. Can anyone please tell me what's going on here and what I need to do to force the EditText to retain its edits? Thanks, Ian

    Read the article

  • Ajax Request using jQuery in Rails

    - by Steve
    Hi... I am sending an Ajax request using jQuery, and I am getting a "405 Method Not Allowed" error. I am just posting a form, which takes the details from the form and inserts them into the DB; just the usual stuff. I am using WEBrick, which comes as the default with the Rails package. Can somebody please tell me how to fix this? This is the code that triggers the Ajax request:

        $.post($(this).attr("action") + ".js", $(this).serialize(), null, "script");

    Response headers:

        Cache-Control    no-cache
        Allow            GET, PUT, DELETE
        Content-Type     text/html; charset=utf-8
        Content-Length   9502
        Server           WEBrick/1.3.1 (Ruby/1.9.1/2009-12-07)
        Date             Wed, 02 Jun 2010 20:41:33 GMT
        Connection       Keep-Alive

    Request headers:

        Host               localhost:3000
        User-Agent         Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
        Accept             application/json, text/javascript, */*
        Accept-Language    en-us,en;q=0.5
        Accept-Encoding    gzip,deflate
        Accept-Charset     ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive         115
        Connection         keep-alive
        Content-Type       application/x-www-form-urlencoded; charset=UTF-8
        X-Requested-With   XMLHttpRequest
        Referer            http://localhost:3000/viewspot/3
        Content-Length     141
        Pragma             no-cache
        Cache-Control      no-cache

    Read the article

  • Protecting a form from hijacking

    - by Karem
    Yes, hello. Today I discovered a hack against my site. When you write a message on a user's wall (on my community site), it runs an Ajax call to insert the message into the DB and, on success, slides it down and shows it. Works fine with no problem. So I was rethinking a little: I am using the POST method for this, and if it were the GET method you could easily do ?msg=haxmsg&usr=12345679. But what could you do to get around the POST method? I made a new HTML document, made a form, and set its action to "site.com/insertwall.php" (the file that is normally used by the Ajax call). I made some input fields with names exactly like the ones I use in the Ajax call (msg, uID (user id), BuID (by user id)) and added a submit button. I have a page_protect() function which requires you to log in; if you aren't logged in you are redirected to index.php. So I logged in (started a session on my site.com) and then pressed the submit button. And then, whoops, I saw that my site had created a new message. I was like, wow, was it that easy to hijack the POST method? I thought it was a little more secure or something. I would like to know what I can do to prevent this hijacking, as I don't even want to know what real hackers could do with this "hole". page_protect() ensures that the session comes from the same HTTP user agent and so on, and this works fine (I tried to run the form without logging in, and it just redirects me to the start page), but it wouldn't take long to figure out to log in first and then run it. Any advice is appreciated a lot. I would like to keep my Ajax calls as secure as possible, and all of them use the POST method. What could I do in insertwall.php to check that the request comes from my own pages, or something? Thank you
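
    What the question describes is essentially cross-site request forgery. The site is PHP, but the usual countermeasure (a per-session anti-forgery token embedded in the form and checked on every POST) is language-agnostic; here is a minimal sketch in Python, with made-up function names:

        import hmac
        import secrets

        def issue_token(session):
            # Store a random token in the server-side session and embed it in the form
            # as a hidden field; a form hosted elsewhere cannot know this value.
            token = secrets.token_hex(32)
            session['csrf_token'] = token
            return token

        def is_valid_post(session, posted_token):
            expected = session.get('csrf_token', '')
            return bool(expected) and hmac.compare_digest(expected, posted_token or '')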

    Read the article

  • what's the performance difference between int and varchar for primary keys

    - by user568576
    I need to create a primary key scheme for a system that will need peer-to-peer replication, so I'm planning to combine a unique system ID and a sequential number in some way to come up with unique IDs. I want to make sure I'll never run out of IDs, so I'm thinking about using a varchar field, since I could always add another character if I start running out. But I've read that integers are better optimized for this. So I have some questions... 1) Are integers really better optimized? And if they are, how much of a performance difference is there between varchars and integers? I'm going to use Firebird for now, but I may switch later, or possibly support multiple DBs, so I'm looking for generalizations, if that's possible. 2) If integers are significantly better optimized, why is that? And is it likely that varchars will catch up in the future, so eventually it won't matter anyway? My varchar keys won't have any meaning, except for the unique system ID part, but I may want to obscure that somehow. Also, I plan to use all the bits of each character efficiently; I don't, for example, plan to encode the integer 123 as the character string "123". So I don't think varchars will require more space than integers.

    Read the article

  • What is the difference between .get() and .fetch(1)

    - by AutomatedTester
    I have written an app, part of which uses a URL parser to get certain data in a REST-like manner. So if you put /foo/bar as the path it will find all the bar items, and if you put /foo it will return all items below foo. So my app has a query like:

        data = Paths.all().filter('path =', self.request.path).get()

    which works brilliantly. Now I want to send this to the UI using templates:

        {% for datum in data %}
        <div class="content">
            <h2>{{ datum.title }}</h2>
            {{ datum.content }}
        </div>
        {% endfor %}

    When I do this I get a "data is not iterable" error. So I updated the template to {% for datum in data.all %}, which somehow appears to pull more data than I gave it: it shows all the data in the datastore, which is not ideal. So I removed the .all from the template and changed the datastore query to:

        data = Paths.all().filter('path =', self.request.path).fetch(1)

    which now works as I intended. The documentation says "The db.get() function fetches an entity from the datastore for a Key (or list of Keys)." So my question is: why can I iterate over a query when it returns with fetch() but not with get()? Where has my understanding gone wrong?
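
    For reference, the practical difference in the old App Engine db API is roughly this (sketch; the query is shown generically): Query.get() returns a single model instance or None, which a {% for %} loop cannot iterate over, while fetch(n) always returns a list.

        query = Paths.all().filter('path =', path)

        one  = query.get()      # a single Paths entity, or None (not iterable)
        some = query.fetch(10)  # a list of up to 10 Paths entities (iterable)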

    Read the article

  • Social Media Java Design Problem

    - by jboyd
    I need to put something together quickly that will take blog posts and place them on social media sites. The requirements are as follows:

    - Blog entries are independent records that already exist; they have a published date and a modified date. The blog entry application cannot be changed, at least not substantially.
    - A new blog entry, or an update, needs to be sent to social media sites.
    - I currently do not need to update or delete social media communications if the blog entry is edited or deleted, though I may need to later.

    My design problems here are as follows:

    - How do I know the status of each update?
    - How can I figure out which blog entry updates and postings have already been sent out?
    - How can I quickly poll the blog entry table for postings that haven't yet been sent out, while avoiding loading each entry record from the DB as an object and asking whether it has been sent already? That would be too slow.
    - I cannot hook into any blog entry update code; my only option would be to create a trigger so that an update queues something to be processed.

    I'm looking for general guiding principles here. The biggest problem I'm having is coming up with any reasonable way to figure out whether a blog entry should be sent to our social media sites in the first place.

    Read the article

  • starting rails in test environment

    - by Brian D.
    I'm trying to load up Rails in the test environment using a Ruby script. I've googled a bit and found this recommendation:

        require "../../config/environment"
        ENV['RAILS_ENV'] = ARGV.first || ENV['RAILS_ENV'] || 'test'

    This seems to load up my environment all right, but my development database is still being used. Am I doing something wrong? Here is my database.yml file, although I don't think it is the issue:

        development:
          adapter: mysql
          encoding: utf8
          reconnect: false
          database: BrianSite_development
          pool: 5
          username: root
          password: dev
          host: localhost

        # Warning: The database defined as "test" will be erased and
        # re-generated from your development database when you run "rake".
        # Do not set this db to the same as development or production.
        test:
          adapter: mysql
          encoding: utf8
          reconnect: false
          database: BrianSite_test
          pool: 5
          username: root
          password: dev
          host: localhost

        production:
          adapter: mysql
          encoding: utf8
          reconnect: false
          database: BrianSite_production
          pool: 5
          username: root
          password: dev
          host: localhost

    I can't use ruby script/server -e test because I'm trying to run Ruby code after I load Rails. More specifically, what I'm trying to do is: run a .sql database script, load up Rails, and then run automated tests. Everything seems to be working fine, but for whatever reason Rails seems to be loading the development environment instead of the test environment. Here is a shortened version of the code I am trying to run:

        system "execute mysql script here"

        require "../../config/environment"
        ENV['RAILS_ENV'] = ARGV.first || ENV['RAILS_ENV'] || 'test'

        describe Blog do
          it "should be initialized successfully" do
            blog = Blog.new
          end
        end

    I don't need to start a server, I just need to load my Rails code base (models, controllers, etc.) so I can run tests against my code. Thanks for any help.

    Read the article

  • Using Memcached in Python/Django - questions.

    - by Thomas
    I am starting to use memcached to make my website faster. For constant data in my database I use this:

        from django.core.cache import cache

        cache_key = 'regions'
        regions = cache.get(cache_key)
        if result is None:
            """Not found in cache"""
            regions = Regions.objects.all()
            cache.set(cache_key, regions, 2592000)  # (2592000 seconds = 30 days)
        return regions

    For seldom-changed data I use signals:

        from django.core.cache import cache
        from django.db.models import signals

        def nuke_social_network_cache(self, instance, **kwargs):
            cache_key = 'networks_for_%s' % (self.instance.user_id,)
            cache.delete(cache_key)

        signals.post_save.connect(nuke_social_network_cache, sender=SocialNetworkProfile)
        signals.post_delete.connect(nuke_social_network_cache, sender=SocialNetworkProfile)

    Is this the correct way? I installed django-memcached-0.1.2, which shows me:

        Memcached Server Stats
        Server     Keys  Hits  Gets  Hit_Rate  Traffic_In  Traffic_Out  Usage    Uptime
        127.0.0.1  15    220   276   79%       83.1 KB     364.1 KB     18.4 KB  22:21:25

    Can somebody explain what the columns mean? And a last question: I have templates where I fetch many records from a few tables (relationships). In my view I get records from one table, and in the template I show them along with related info from the other tables. Generating the page takes a few seconds even for very small tables (<100 records). Is there some easy way to cache the queries made from templates? Do I have to build some big structure in my view (with all the related tables), cache it, and send that to the template?
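
    On the last question, a hedged sketch of one option (plain Python in the view; the model name is from the question, the helper name is made up): evaluate the related records once, cache the resulting list, and hand that to the template instead of a lazy queryset.

        from django.core.cache import cache

        def related_networks_for(user_id):
            cache_key = 'networks_for_%s' % user_id
            networks = cache.get(cache_key)
            if networks is None:
                # list() forces the query once; select_related() pulls the joined
                # rows in the same query instead of one query per template access.
                networks = list(SocialNetworkProfile.objects
                                .filter(user_id=user_id)
                                .select_related())
                cache.set(cache_key, networks, 600)
            return networks

    This reuses the same 'networks_for_%s' key that the post_save/post_delete signal above already invalidates.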

    Read the article

  • Best practice for storing HTML in a database column

    - by tbrandao
    I have an application that modifies a table dynamically (think spreadsheet). Upon saving the form (which the table is part of), I store that changed table (with the user's modifications) in a database column named html_spreadsheet, along with the rest of the form data. Right now I'm just storing the HTML as plain text with basic escaping of characters. I'm aware that this could be stored as a separate file; the source table (html_worksheet) already is. But from a data-handling perspective it's easier to save the changed HTML table to and from a column, so as to avoid having to come up with a file management strategy (which folder will this live in, now the folder must be included in backups, security rules now need to apply to files, how to sync DB security with file-system security, etc.). To minimize these issues I'm only storing the table markup itself in the database column. My question is: should I gzip the HTML, or maybe use JSON or some other format, to easily store and retrieve the HTML from the database column? What is the best practice for storing HTML content in a database column? Or should I just store it as I currently am, as an escaped text column?
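
    If compression turns out to be worthwhile for large fragments, here is a minimal sketch with Python's zlib (an illustration, not the poster's stack; the column would then need to be a binary/BLOB type rather than text):

        import zlib

        html = "<table><tr><td>edited cell</td></tr></table>"   # the user-modified markup

        compressed = zlib.compress(html.encode("utf-8"))        # bytes to store in the BLOB column
        restored = zlib.decompress(compressed).decode("utf-8")
        assert restored == html

    Plain escaped text in a TEXT column remains the simplest option and is usually fine unless the saved tables get large.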

    Read the article

  • Export a GridView to Excel, then send it by mail (C#)

    - by Diego Bran
    I want to send a .xlsx file by mail. First I create it (it has HTML code in it), then I use an SMTP server to send it. The file does get attached, but when I try to open it, it says the file is corrupted. Any help? Here is my code:

        try
        {
            System.IO.StringWriter sw = new System.IO.StringWriter();
            System.Web.UI.HtmlTextWriter htw = new System.Web.UI.HtmlTextWriter(sw);

            // Render grid view control.
            gvStock.RenderControl(htw);

            // Write the rendered content to a file.
            string renderedGridView = sw.ToString();
            File.WriteAllText(@"C:\test\ExportedFile.xls", renderedGridView);
            // File.WriteAllText(@"C:\test\ExportedFile.xls", p1);
        }
        catch (Exception e)
        {
            Response.Write(e.Message);
        }

        try
        {
            MailMessage mail = new MailMessage();
            SmtpClient SmtpServer = new SmtpClient("server");
            mail.From = new MailAddress("[email protected]");
            mail.To.Add("[email protected]");
            mail.Subject = "Test Mail - 1";
            mail.Body = "mail with attachment";

            Attachment data = new Attachment("C:/test/ExportedFile.xls");
            mail.Attachments.Add(data);

            SmtpServer.Port = 25;
            SmtpServer.Credentials = new System.Net.NetworkCredential("user", "pass");
            // SmtpServer.EnableSsl = true;
            SmtpServer.UseDefaultCredentials = false;
            SmtpServer.Send(mail);
        }
        catch (Exception e)
        {
            Response.Write(e.Message);
        }

    Read the article

  • Program freezes when syncing an LDAP database (100+ entries added)

    - by djerry
    Hey guys, I'm updating an LDAP database. I need to add a list of users to the DB, and I've written a simple foreach loop. There are about 180 users I need to add, but at the 128th user the program freezes. I know LDAP is really meant for querying (fast), and that adding and modifying entries will not go as smoothly as a search query, but is it normal that the program freezes while doing this? I'll post some code just in case:

        public static void SyncLDAPWithMySql(Novell.Directory.Ldap.LdapConnection _conn)
        {
            List<User> users = GetUsers();
            int iteller = 0;
            foreach (User user in users)
            {
                if (!UserAlreadyInLdap(user, _conn))
                {
                    TelUser teluser = new TelUser();
                    teluser.Telephone = user.E164;
                    teluser.Uid = user.E164;
                    teluser.Company = "/";
                    teluser.Dn = "";
                    teluser.Name = "/";
                    teluser.DisplayName = "/";
                    teluser.FirstName = "/";
                    TelephoneDA.InsertUser(_conn, teluser);
                }
                Console.WriteLine(iteller + " : " + user.E164);
                iteller++;
            }
        }

        private static bool UserAlreadyInLdap(User user, Novell.Directory.Ldap.LdapConnection _conn)
        {
            List<TelUser> users = TelephoneDA.GetAllEntries(_conn);
            foreach (TelUser teluser in users)
            {
                if (teluser.Telephone.Equals(user.E164))
                    return true;
            }
            return false;
        }

        public static int InsertUser(LdapConnection conn, TelUser user)
        {
            int iResponse = IsTelNumberUnique(conn, user.Dn, user.Telephone);
            if (iResponse == 0)
            {
                LdapAttributeSet attrSet = MakeAttSet(user);
                string dnForPhonebook = configurationManager.AppSettings.Get("phonebookDn");
                LdapEntry ent = new LdapEntry("uid=" + user.Uid + "," + dnforPhonebook, attrSet);
                try
                {
                    conn.Add(ent);
                }
                catch (Exception ex)
                {
                    Console.WriteLine(ex.Message);
                }
            }
            return iResponse;
        }

    Am I adding too many entries at a time? Thanks in advance.

    Read the article

  • Spring Security ACL: NotFoundException from JDBCMutableAclService.createAcl

    - by user340202
    Hello, I've been working on this task for too long to abandon the idea of using Spring Security to achieve it, but I hope the community can provide some support that will help reduce the regret I have for choosing Spring Security. Enough ranting; let's get to the point. I'm trying to create an ACL using JDBCMutableAclService.createAcl as follows:

        public void addPermission(IWFArtifact securedObject, Sid recipient, Permission permission, Class clazz) {
            ObjectIdentity oid = new ObjectIdentityImpl(clazz.getCanonicalName(), securedObject.getId());
            this.addPermission(oid, recipient, permission);
        }

        @Override
        @Transactional(propagation = Propagation.REQUIRED, isolation = Isolation.READ_UNCOMMITTED, readOnly = false)
        public void addPermission(ObjectIdentity oid, Sid recipient, Permission permission) {
            SpringSecurityUtils.assureThreadLocalAuthSet();
            MutableAcl acl;
            try {
                acl = this.mutableAclService.createAcl(oid);
            } catch (AlreadyExistsException e) {
                acl = (MutableAcl) this.mutableAclService.readAclById(oid);
            }
            // try {
            //     acl = (MutableAcl) this.mutableAclService.readAclById(oid);
            // } catch (NotFoundException nfe) {
            //     acl = this.mutableAclService.createAcl(oid);
            // }
            acl.insertAce(acl.getEntries().length, permission, recipient, true);
            this.mutableAclService.updateAcl(acl);
        }

    The call throws a NotFoundException from this line:

        // Retrieve the ACL via superclass (ensures cache registration, proper retrieval etc)
        Acl acl = readAclById(objectIdentity);

    I believe this is caused by something related to @Transactional, which is why I have tested with many TransactionDefinition attributes. I have also doubted the annotation and tried declarative transaction definition, but still with no luck. One important point: I took the statement used to insert the OID into the database earlier in the method and ran it directly against the database, and it worked; it also threw a unique constraint exception at me when the method later tried to insert the same OID. I'm using Spring Security 2.0.8 and IceFaces 1.8 (which doesn't support Spring 3.0 but definitely supports 2.0.x, especially since I keep calling SpringSecurityUtils.assureThreadLocalAuthSet()). My app server is Tomcat 6.0 and my DB server is MySQL 6.0. I hope to get a reply soon, because I need to get this task out of my way.

    Read the article

  • Modules vs. Classes and their influence on descendants of ActiveRecord::Base

    - by Chris
    Here's a Ruby OO head scratcher for ya, brought about by this Rails scenario:

        class Product < ActiveRecord::Base
          has_many(:prices)
          # define private helper methods
        end

        module PrintProduct
          attr_accessor(:isbn)
          # override methods in ActiveRecord::Base
        end

        class Book < Product
          include PrintProduct
        end

    Product is the base class of all products. Books are kept in the products table via STI. The PrintProduct module brings some common behavior and state to descendants of Product. Book is used inside fields_for blocks in views. This works for me, but I found some odd behavior: after form submission, inside my controller, if I call a method on a book that is defined in PrintProduct, and that method calls a helper method defined in Product, which in turn calls the prices method defined by has_many, I'll get an error complaining that Book#prices is not found. Why is that? Book is a direct descendant of Product! More interesting is the following. As I developed this hierarchy, PrintProduct started to become more of an abstract ActiveRecord::Base, so I thought it prudent to redefine everything as such:

        class Product < ActiveRecord::Base
        end

        class PrintProduct < Product
        end

        class Book < PrintProduct
        end

    All method definitions, etc. are the same. In this case, however, my web form won't load, because the attributes defined by attr_accessor (which are "virtual attributes" referenced by the form but not persisted in the DB) aren't found. I'll get an error saying that there is no method Book#isbn. Why is that?? I can't see a reason why the attr_accessor attributes are not found inside my form's fields_for block when PrintProduct is a class, but they are found when PrintProduct is a module. Any insight would be appreciated. I'm dying to know why these errors are occurring!

    Read the article

  • Oracle user-defined aggregate function for a varray of varchar

    - by baju
    I am trying to write an aggregate function for a varray, and I get this error when I try to use it with data from the DB:

        ORA-00600: internal error code, arguments: [kodpunp1], [], [], [], [], [], [], [], [], [], [], []
        [koxsihread1], [0], [3989], [45778], [], [], [], [], [], [], [], []

    The code of the function is really simple (in fact it does nothing):

        create or replace TYPE "TEST_VECTOR" as varray(10) of varchar(20)

        ALTER TYPE "TEST_VECTOR" MODIFY LIMIT 4000 CASCADE

        create or replace type Test as object(
          lastVector TEST_VECTOR,
          STATIC FUNCTION ODCIAggregateInitialize(sctx in out Test) return number,
          MEMBER FUNCTION ODCIAggregateIterate(self in out Test, value in TEST_VECTOR) return number,
          MEMBER FUNCTION ODCIAggregateMerge(self IN OUT Test, ctx2 IN Test) return number,
          MEMBER FUNCTION ODCIAggregateTerminate(self IN Test, returnValue OUT TEST_VECTOR, flags IN number) return number
        );

        create or replace type body Test is
          STATIC FUNCTION ODCIAggregateInitialize(sctx in out Test) return number is
          begin
            sctx := Test(TEST_VECTOR());
            return ODCIConst.Success;
          end;

          MEMBER FUNCTION ODCIAggregateIterate(self in out Test, value in TEST_VECTOR) return number is
          begin
            self.lastVector := value;
            return ODCIConst.Success;
          end;

          MEMBER FUNCTION ODCIAggregateMerge(self IN OUT Test, ctx2 IN Test) return number is
          begin
            return ODCIConst.Success;
          end;

          MEMBER FUNCTION ODCIAggregateTerminate(self IN Test, returnValue OUT TEST_VECTOR, flags IN number) return number is
          begin
            returnValue := self.lastVector;
            return ODCIConst.Success;
          end;
        end;

        create or replace FUNCTION test_fn (input TEST_VECTOR) RETURN TEST_VECTOR
          PARALLEL_ENABLE AGGREGATE USING Test;

    Next I create some test data:

        create table t1_test_table(
          t1_id number not null,
          t1_value TEST_VECTOR not null,
          Constraint PRIMARY_KEY_1 PRIMARY KEY (t1_id)
        )

    and put a row into the table:

        insert into t1_test_table (t1_id, t1_value) values (1, TEST_VECTOR('x','y','z'))

    Now everything is prepared to perform queries:

        Select test_fn(TEST_VECTOR('y','x')) from dual

    The query above works well.

        Select test_fn(t1_value) from t1_test_table where t1_id = 1

    The second query, which reads the varray from the table, is the one that raises the ORA-00600 above. The Oracle DBMS version I use is 11.2.0.3.0. Has anyone tried to do such a thing? What can be the reason that it does not work, and how can I solve it? Thanks in advance for your help.

    Read the article

  • C# & SQL Server Authentication

    - by Peter
    Hello, I'm currently developing a C# app with a SQL Server DB back-end. I'm approaching the point of deployment and hitting a problem. The application will be deployed within an Active Directory network. As far as SQL authentication goes, I understand that I have two options: Windows Authentication or SQL Server Authentication. If I use SQL Server Authentication, I'm concerned that the username and password for the account will be stored in plain text in the app.config file, and therefore leave the database vulnerable. Using Windows Authentication would avoid this issue, but it would mean giving every member of staff within our organisation read/write access to the database in order to run the app correctly. While this is OK, it also means that they could easily connect to the database themselves via other means and directly alter the data outside of the app. I'm guessing there is something really obvious I'm missing here, but I've been googling all evening to no avail. Any advice/guidance would be much appreciated! Peter. Addition: my project is Windows Forms based, not ASP.NET. Is encrypting the app.config file still the right answer? If it is, does anyone have any examples that are not ASP.NET based?

    Read the article

  • SQL query for an access database needed

    - by masfenix
    Hey guys, first of all sorry, I can't log in using my Yahoo provider. Anyway, I have this problem. Let me explain it to you, and then I'll show you a picture. I have an Access DB table. It has 'report id', 'recipient id', 'recipient name' and 'report req'. What the table "means" is: does the user using that report still require it, or can we decommission it? Here is how the data looks (with company user IDs and user names blocked out): *check the link below; I can't post pictures because the Yahoo OpenID provider isn't working. So basically I need three select queries: 1) Select all the reports where, for each report, ALL the users have said no to 'report req'. In plain English, I want a listing of all the reports that we have to decommission because no user wants them. 2) Select all the reports where the report is required and batchprintcopy is more than 0. This way we can see which reports need to be printed, and save paper instead of printing all the reports. 3) A listing of all the reports where the 'report req' field is empty. I think I can figure this one out myself. This is using Access/VBA, and the data will be exported to an Excel spreadsheet. I just need a simple query if it exists, OR an algorithm to do it quickly. I just tried making a "matrix" and it took about 2 hours to populate. https://docs.google.com/uc?id=0B2EMqbpeBpQkMTIyMzA5ZjMtMGQ3Zi00NzRmLWEyMDAtODcxYWM0ZTFmMDFk&hl=en_US

    Read the article

  • PHP class_exists always returns true

    - by Ali
    I have a PHP class that needs some pre-defined globals to exist before the file is included. File: includes/Product.inc.php:

        if (class_exists('Product')) {
            return;
        }

        // This class requires some predefined globals
        if ( !isset($gLogger) || !isset($db) || !isset($glob) ) {
            return;
        }

        class Product {
            ...
        }

    The above is included via require_once in other PHP files that need to use Product. Anyone who wants to use Product must, however, ensure those globals are available; at least that's the idea. I recently debugged an issue in a function within the Product class which was caused by $gLogger being null. The code requiring Product.inc.php had not bothered to create $gLogger. So the question is: how was this class ever included if $gLogger was null? I tried to debug the code (xdebug in NetBeans) and put a breakpoint at the start of Product.inc.php to find out, and every time it came to the if (class_exists('Product')) clause it would simply step in and return, thus never getting to the global checks. So how was it ever included the first time? This is PHP 5.1+ running under MAMP (Apache/MySQL). I don't have any autoloaders defined.

    Read the article

  • ASP.Net MVC - how can I easily serialize query results to a database?

    - by Mortanis
    I've been working on a little property search engine while I learn ASP.Net MVC. I've gotten the results from various property database tables and sorted them into a master generic property response. The search form is passed via Model Binding and works great. Now, I'd like to add pagination. I'm returning the chunk of properties for the current page with .Skip() and .Take(), and that's working great. I have a SearchResults model that has the paged result set and various other data like nextPage and prevPage. Except, I no longer have the original form of course to pass to /Results/2. Previously I'd have just hidden a copy of the form and done a POST each time, but it seems inelegant. I'd like to serialize the results to my MS SQL database and return a unique key for that results set - this also helps with a "Send this query to a friend!" link. Killing two birds with one stone. Is there an easy way to take an IQueryable result set that I have, serialize it, stick it into the DB, return a unique key and then reverse the process with said key? I'm using Linq to SQL currently on a MS SQL Express install, though in production it'll be on MS SQL 2008.

    Read the article
