Search Results

Search found 10077 results on 404 pages for 'techie db'.

  • SSIS - Bulk Update at Database Field Level

    - by Adam
    Hello. Here's our mission: receive files from clients. Each file contains anywhere from 1 to 1,000,000 records. Records are loaded to a staging area and business-rule validation is applied. Valid records are then pumped into an OLTP database in a batch fashion, with the following rules: if the record does not exist (we have a key, so this isn't an issue), create it; if the record exists, optionally update each database field. The decision is made based on one of 3 factors; I don't believe it's important what those factors are.

    Our main problem is finding an efficient method of optionally updating the data at a field level. This applies across ~12 different database tables, with anywhere from 10 to 150 fields in each table (the original DB design leaves much to be desired, but it is what it is). Our first attempt has been to introduce a table that mirrors the staging environment (1 field in staging for each system field) and contains a masking flag. The value of the masking flag represents the 3 factors. We've then put an UPDATE similar to:

        UPDATE OLTPTable1
        SET Field1 = CASE
            WHEN Mask.Field1 = 0 THEN Staging.Field1
            WHEN Mask.Field1 = 1 THEN COALESCE(Staging.Field1, OLTPTable1.Field1)
            WHEN Mask.Field1 = 2 THEN COALESCE(OLTPTable1.Field1, Staging.Field1)
        ...

    As you can imagine, the performance is rather horrendous. Has anyone tackled a similar requirement? We're an MS shop using a Windows Service to launch SSIS packages that handle the data processing. Unfortunately, we're pretty much novices at this stuff.
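
    One direction worth benchmarking (a sketch only; the table and key names below are hypothetical, since the post doesn't give them): keep the CASE logic, but run it as a single set-based UPDATE that joins the OLTP table to the staging and mask tables on the key, so each table is touched once per batch rather than once per record.

        UPDATE t
        SET    Field1 = CASE m.Field1
                          WHEN 0 THEN s.Field1
                          WHEN 1 THEN COALESCE(s.Field1, t.Field1)
                          WHEN 2 THEN COALESCE(t.Field1, s.Field1)
                        END
               -- ...repeat per field...
        FROM   OLTPTable1    AS t
        JOIN   StagingTable1 AS s ON s.RecordKey = t.RecordKey
        JOIN   MaskTable1    AS m ON m.RecordKey = s.RecordKey;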

  • How can I implement a volume meter for a song currently playing? (iPhone OS 3.1.3)

    - by Adam
    Hi, I'm very new to Core Audio and would like some help coding up a little volume meter for whatever's being output through the headphones or built-in speaker, like a dB meter. I have the following code, and have been trying to work through Apple's sample project "SpeakHere", but it's a nightmare trying to go through all of that without knowing how it works first. Could anyone shed some light? Here's the code I have so far:

        - (void)displayWaveForm {
            while (musicIsPlaying == YES) {
                NSLog(@"%f", sizeof(AudioQueueLevelMeterState));
            }
        }

        - (IBAction)playMusic {
            if (musicIsPlaying == NO) {
                NSURL *url = [NSURL fileURLWithPath:[NSString stringWithFormat:@"%@/track7.wav",
                                  [[NSBundle mainBundle] resourcePath]]];
                NSError *error;
                music = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error];
                music.numberOfLoops = -1;
                music.volume = 0.5;
                [music play];
                musicIsPlaying = YES;
                [self displayWaveForm];
            } else {
                [music pause];
                musicIsPlaying = NO;
            }
        }
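
    Since playback here already goes through AVAudioPlayer, one option that avoids Audio Queue metering entirely is the player's own metering support. A minimal sketch, polling with a timer rather than a blocking while loop (which would freeze the UI); method names other than the AVAudioPlayer calls are illustrative:

        - (void)startMetering {
            music.meteringEnabled = YES;   // must be enabled before readings are taken
            [NSTimer scheduledTimerWithTimeInterval:0.1
                                             target:self
                                           selector:@selector(meterTick:)
                                           userInfo:nil
                                            repeats:YES];
        }

        - (void)meterTick:(NSTimer *)timer {
            [music updateMeters];
            // averagePowerForChannel: returns dB; 0.0 is full scale, more negative is quieter
            NSLog(@"level: %f dB", [music averagePowerForChannel:0]);
        }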

  • Creating self-referential tables with polymorphism in SQLAlchemy

    - by Jace
    I'm trying to create a db structure in which I have many types of content entities, of which one, a Comment, can be attached to any other. Consider the following:

        from datetime import datetime
        from sqlalchemy import create_engine
        from sqlalchemy import Column, ForeignKey
        from sqlalchemy import Unicode, Integer, DateTime
        from sqlalchemy.orm import relation, backref
        from sqlalchemy.ext.declarative import declarative_base

        Base = declarative_base()

        class Entity(Base):
            __tablename__ = 'entities'
            id = Column(Integer, primary_key=True)
            created_at = Column(DateTime, default=datetime.utcnow, nullable=False)
            edited_at = Column(DateTime, default=datetime.utcnow,
                               onupdate=datetime.utcnow, nullable=False)
            type = Column(Unicode(20), nullable=False)
            __mapper_args__ = {'polymorphic_on': type}

        # <...insert some models based on Entity...>

        class Comment(Entity):
            __tablename__ = 'comments'
            __mapper_args__ = {'polymorphic_identity': u'comment'}
            id = Column(None, ForeignKey('entities.id'), primary_key=True)
            _idref = relation(Entity, foreign_keys=id, primaryjoin=id == Entity.id)
            attached_to_id = Column(Integer, ForeignKey('entities.id'), nullable=False)
            #attached_to = relation(Entity, remote_side=[Entity.id])
            attached_to = relation(Entity, foreign_keys=attached_to_id,
                                   primaryjoin=attached_to_id == Entity.id,
                                   backref=backref('comments', cascade="all, delete-orphan"))
            text = Column(Unicode(255), nullable=False)

        engine = create_engine('sqlite://', echo=True)
        Base.metadata.bind = engine
        Base.metadata.create_all(engine)

    This seems about right, except SQLAlchemy doesn't like having two foreign keys pointing to the same parent. It says:

        ArgumentError: Can't determine join between 'entities' and 'comments'; tables
        have more than one foreign key constraint relationship between them. Please
        specify the 'onclause' of this join explicitly.

    How do I specify onclause?
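
    For joined-table inheritance specifically, the join the error complains about is the inheritance join, and SQLAlchemy lets you spell it out with inherit_condition in __mapper_args__. A sketch of what that might look like here (untested against this exact model):

        class Comment(Entity):
            __tablename__ = 'comments'
            id = Column(None, ForeignKey('entities.id'), primary_key=True)
            # Tell the mapper which of the two FKs is the parent/child link:
            __mapper_args__ = {
                'polymorphic_identity': u'comment',
                'inherit_condition': id == Entity.id,
            }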

  • Updating a specific key/value inside of an array field with MongoDB

    - by Jesta
    As a preface, I've been working with MongoDB for about a week now, so this may turn out to be a pretty simple answer. I have data already stored in my collection; we will call this collection content, as it contains articles, news, etc. Each of these articles contains another array called authors, which has all of the author's information (address, phone, title, etc).

    The goal: I am trying to create a query that will update the author's address on every article that the specific author exists in, and only the specified author block (not the others that exist within the array). Sort of a "global update" to a specific author that affects his/her information on every piece of content that exists. Here is an example of what the content with the authors looks like:

        {
            "_id" : ObjectId("4c1a5a948ead0e4d09010000"),
            "authors" : [
                {
                    "user_id" : null,
                    "slug" : "joe-somebody",
                    "display_name" : "Joe Somebody",
                    "display_title" : "Contributing Writer",
                    "display_company_name" : null,
                    "email" : null, "phone" : null, "fax" : null,
                    "address" : null, "address2" : null, "city" : null,
                    "state" : null, "zip" : null, "country" : null,
                    "image" : null, "url" : null, "blurb" : null
                },
                {
                    "user_id" : null,
                    "slug" : "jane-somebody",
                    "display_name" : "Jane Somebody",
                    "display_title" : "Editor",
                    "display_company_name" : null,
                    "email" : null, "phone" : null, "fax" : null,
                    "address" : null, "address2" : null, "city" : null,
                    "state" : null, "zip" : null, "country" : null,
                    "image" : null, "url" : null, "blurb" : null
                }
            ],
            "tags" : [ "tag1", "tag2", "tag3" ],
            "title" : "Title of the Article"
        }

    I can find every article that this author has created by running the following command:

        db.content.find({authors: {$elemMatch: {slug: 'joe-somebody'}}});

    So theoretically I should be able to update the author record for the slug joe-somebody but not jane-somebody (the 2nd author); I am just unsure exactly how you reach in and update every record for that author. I thought I was on the right track; here's what I've tried:

        db.content.update(
            {authors: {$elemMatch: {slug: 'joe-somebody'}}},
            {$set: {address: '1234 Avenue Rd.'}},
            false, true
        );

    I just believe there's something I am missing in the $set statement to specify the correct author and point inside of the correct array. Any ideas?

    Update: I've also tried this now:

        db.content.update(
            {authors: {$elemMatch: {slug: 'joe-somebody'}}},
            {$set: {'authors.$.address': '1234 Avenue Rd.'}},
            false, true
        );
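
    For what it's worth, the positional $ operator resolves against the array element matched by the query, so a common formulation (a sketch, using the same example address) queries the embedded field directly with dot notation:

        db.content.update(
            {'authors.slug': 'joe-somebody'},
            {$set: {'authors.$.address': '1234 Avenue Rd.'}},
            false, true   // upsert = false, multi = true
        );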

  • Eclipse/Django using wrong settings.py in PYTHONPATH

    - by user1290264
    I have PyDev/Django installed in Eclipse, and it runs fine. However, after adding a second Django project to Eclipse and running the server (http://127.0.0.1:8000), the PYTHONPATH seems to be stuck on project2 even when I run project1. As a summary, I have two Django projects: project1 and project2. When I run the Django server for project1 I get:

        Validating models...
        0 errors found
        Django version 1.5, using settings 'project1.settings'
        Development server is running at http://127.0.0.1:8000/
        Quit the server with CTRL-BREAK.

    The above seems to suggest that Django is using the correct settings file; however, when I go to http://127.0.0.1:8000/ it displays the URLs from project2. Also, if I go to http://127.0.0.1:8000/admin the models are getting pulled from the sqlite.db file in project2 as well. I've even tried removing project2 from Eclipse entirely, and now at http://127.0.0.1:8000/admin I get this error:

        Python Path: ['C:\Users\Brad\workspaces\In Progress\project2',
        'C:\Users\Brad\workspaces\In Progress\project2', 'C:\Python27\DLLs',
        'C:\Python27\lib', 'C:\Python27\lib\plat-win', 'C:\Python27\lib\lib-tk',
        'C:\Python27', 'C:\Python27\lib\site-packages',
        'C:\Windows\system32\python27.zip']

    If I run the server on a different port with project1, the path seems to be fine:

        runserver 7000 --noreload

    Then http://127.0.0.1:7000/ uses project1's paths, but it doesn't seem like I should have to do this.

    Note: I have set up the run configurations as correctly as I know how. In the main tab, the project and main module both point to the correct project (project1), and the "PYTHONPATH that will be used in the run:" includes project1. Also, I have cleared my browser history, cookies, and everything that Chrome would let me delete.

  • Running migration on server when deploying with capistrano

    - by Pandafox
    Hi, I'm trying to deploy my Rails application with Capistrano, but I'm having some trouble running my migrations. In my development environment I just use SQLite as my database, but on my production server I use MySQL. The problem is that I want the migrations to run from my server and not my local machine, as I am not able to connect to my database from a remote location. My server setup: a Debian box running nginx, Passenger, MySQL, and a git repository. What is the easiest way to do this?

    Update: here's my deploy script:

        set :application, "example.com"
        set :domain, "example.com"
        set :scm, :git
        set :repository, "[email protected]:project.git"
        set :use_sudo, false
        set :deploy_to, "/var/www/example.com"

        role :web, domain
        role :app, domain
        role :db, "localhost", :primary => true

        after "deploy", "deploy:migrate"

    When I run cap deploy, everything works fine until it tries to run the migration. Here's the error I'm getting:

        ** [deploy:update_code] exception while rolling back: Capistrano::ConnectionError,
        connection failed for: localhost (Errno::ECONNREFUSED: Connection refused - connect(2))
        connection failed for: localhost (Errno::ECONNREFUSED: Connection refused - connect(2))

    This is why I need to run the migration from the server and not from my local machine. Any ideas?
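
    A sketch of the usual fix, assuming migrations should run on the app server itself: Capistrano connects from the deploying machine to whatever host the :db role names, so "localhost" makes it try to SSH to your own box. Pointing the primary :db role at the server means the built-in deploy:migrate task runs rake db:migrate there, over SSH:

        role :db, domain, :primary => true   # same host as :app/:web
        after "deploy:update_code", "deploy:migrate"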

  • RavenDB Ids and ASP.NET MVC3 Routes

    - by goober
    Hey all, just building a quick, simple site with MVC 3 RC2 and RavenDB to test some things out. I've been able to make a bunch of projects, but I'm curious as to how Html.ActionLink() handles a RavenDB ID.

    My example: I have a document called "reasons" (a reason for something, just text mostly), which has reason text and a list of links. I can add, remove, and do everything else fine via my repository. Below is the part of my Razor view that lists each reason in a bulleted list, with an Edit link as the first text:

        @foreach (var Reason in ViewBag.ReasonsList) {
            <li>@Html.ActionLink("Edit", "Reasons", "Edit", new { id = Reason.Id }, null) @Reason.ReasonText</li>
            <ul>
            @foreach (var reasonlink in Reason.ReasonLinks) {
                <li><a href="@reasonlink.URL">@reasonlink.URL</a></li>
            }
            </ul>
        }

    The problem: this works fine, except for the edit link. While the values and code here appear to work directly (i.e. the link is firing directly), RavenDB saves my document's ID as "reasons/1". So, when the URL is built and the ID is passed, the resulting route is http://localhost:4976/Reasons/Edit/reasons/2. The ID is appended correctly, but MVC is interpreting it as its own route. Any suggestions on how I might be able to get around this? Do I need to create a special route to handle it, or is there something else I can do?
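
    One approach commonly used for slash-containing IDs (a sketch, not from the original post; controller and route names are illustrative): register a route whose id segment is a catch-all parameter, so "reasons/2" is captured whole instead of being split into extra path segments:

        // In Global.asax.cs, before the default route:
        routes.MapRoute(
            "ReasonsEdit",
            "Reasons/Edit/{*id}",        // {*id} greedily matches "reasons/2"
            new { controller = "Reasons", action = "Edit" }
        );

    An alternative is to strip the document ID down to its numeric part in the view and re-prefix "reasons/" in the controller action.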

  • Convert UCS-2 characters to UTF-8 Using C#

    - by quanticle
    I'm pulling some internationalized text from a MS SQL Server 2005 database. As per the defaults for that DB, the characters are stored as UCS-2. However, I need to output the data in UTF-8 format, as I'm sending it out over the web. Currently, I have the following code to convert:

        SqlString dbString = resultReader.GetSqlString(0);
        byte[] dbBytes = dbString.GetUnicodeBytes();
        byte[] utf8Bytes = System.Text.Encoding.Convert(System.Text.Encoding.Unicode,
                                                        System.Text.Encoding.UTF8, dbBytes);
        System.Text.UTF8Encoding encoder = new System.Text.UTF8Encoding();
        string outputString = encoder.GetString(utf8Bytes);

    However, when I examine the output in the browser, it appears to be garbage, no matter what I set the encoding to. What am I missing?

    EDIT: In response to the answers below, the reason I thought I had to perform a conversion is because I can output literal multibyte strings just fine. For example:

        OutputControl.Text = "????????????????????????????????????????????????????????????????";

    works. Here, OutputControl is an ASP.NET Literal. However,

        OutputControl.Text = outputString; // output from the snippet above

    results in mangled output as described above. My hypothesis was that the database's output was somehow getting mangled by ASP.NET. If that's not the case, then what are some other possibilities?
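
    One observation worth making about the snippet above: it is effectively a no-op. It encodes the UTF-16 string to UTF-8 bytes and immediately decodes them back, so outputString equals dbString.Value; .NET strings are always UTF-16 internally, and byte-level conversion only matters at the point the response is written out. A sketch of the simpler form plus the response-level setting (hypothetical, since the page setup isn't shown):

        // The SqlString already holds a correct .NET (UTF-16) string:
        string outputString = resultReader.GetSqlString(0).Value;

        // Let ASP.NET emit the page as UTF-8:
        Response.ContentEncoding = System.Text.Encoding.UTF8;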

  • Best way to test a Delphi application

    - by Osama ALASSIRY
    I have a Delphi application that has many dependencies, and it would be difficult to refactor it to use DUnit (it's huge), so I was thinking about using something like AutomatedQA's TestComplete to do the testing from the front-end UI.

    My main problem is that a bugfix or new feature sometimes breaks old code that was previously tested (manually) and used to work. I have set up the application to use command-line switches to open a specific form that can be tested, and I can create a set of values and clicks that need to be done. But I have a few questions before I do anything drastic (and before purchasing anything):

    1. Is it worth it? Would this be a good way to test?
    2. The results of the tests end up in my database (Oracle); is there an easy way in TestComplete to check these values (multiple fields in multiple tables)?
    3. I would need to set up a test database to do all the automated testing; is there an easy way to automate resetting the test db, other than drop user cascade, create user, ..., impdp?
    4. Is there a way in TestComplete to specify command-line parameters for an exe?

    Does anybody have any similar experiences?

  • Which key value store is the most promising/stable?

    - by Mike Trpcic
    I'm looking to start using a key/value store for some side projects (mostly as a learning experience), but so many have popped up in the recent past that I've got no idea where to begin. Just listing from memory, I can think of:

    - CouchDB
    - MongoDB
    - Riak
    - Redis
    - Tokyo Cabinet
    - Berkeley DB
    - Cassandra
    - MemcacheDB

    And I'm sure there are more out there that have slipped through my search efforts. With all the information out there, it's hard to find solid comparisons between all of the competitors. My criteria and questions are:

    1. (Most important) Which do you recommend, and why?
    2. Which one is the fastest?
    3. Which one is the most stable?
    4. Which one is the easiest to set up and install?
    5. Which ones have bindings for Python and/or Ruby?

    Edit: So far it looks like Redis is the best solution, but that's only because I've gotten one solid response (from ardsrk). I'm looking for more answers like his, because they point me in the direction of useful, quantitative information. Which key-value store do you use, and why?

    Edit 2: If anyone has experience with CouchDB, Riak, or MongoDB, I'd love to hear your experiences with them (and even more so if you can offer a comparative analysis of several of them).

  • How to insert records in a master/detail relationship

    - by croceldon
    I have two tables:

        OutputPackages (master)
        |PackageID|

        OutputItems (detail)
        |ItemID|PackageID|

    OutputItems has an index called 'idxPackage' set on the PackageID column. ItemID is set to auto-increment. Here's the code I'm using to insert masters/details into these tables:

        //fill packages table
        for i := 1 to 10 do
        begin
          Package := TfPackage(dlgSummary.fcPackageForms.Forms[i]);
          if Package.PackageLoaded then
          begin
            with tblOutputPackages do
            begin
              Insert;
              FieldByName('PackageID').AsInteger := Package.ourNum;
              FieldByName('Description').AsString := Package.Title;
              FieldByName('Total').AsCurrency := Package.Total;
              Post;
            end;

            //fill items table
            for ii := 1 to 10 do
            begin
              Item := TfPackagedItemEdit(Package.fc.Forms[ii]);
              if Item.Activated then
              begin
                with tblOutputItems do
                begin
                  Append;
                  FieldByName('PackageID').AsInteger := Package.ourNum;
                  FieldByName('Description').AsString := Item.Description;
                  FieldByName('Comment').AsString := Item.Comment;
                  FieldByName('Price').AsCurrency := Item.Price;
                  Post; //this causes the primary key exception
                end;
              end;
            end;
          end;
        end;

    This works fine as long as I don't mess with the MasterSource/MasterFields properties in the IDE. But once I set them and run this code, I get an error that says I've got a duplicate primary key 'ItemID'. I'm not sure what's going on; this is my first foray into master/detail, so something may be set up wrong. I'm using ComponentAce's Absolute Database for this project. How can I get this to insert properly?

    Update: Ok, I removed the primary key constraint in my db, and I see that for some reason the autoincrement feature of the OutputItems table isn't working like I expected. Here's how the OutputItems table looks after running the above code:

        ItemID|PackageID|
        1     |1        |
        1     |1        |
        2     |2        |
        2     |2        |

    I still don't see why all the ItemID values aren't unique... Any ideas?

  • C# reference collection for storing reference types

    - by ivo s
    I'd like to implement a collection (something like List<T>) which would hold all the objects I create over the entire life span of my application, as if it were an array of pointers in C++. The idea is that when my process starts, I can use a central factory to create all objects and then periodically validate/invalidate their state. Basically, I want to make sure that my process only deals with valid instances and that I don't re-fetch information I already fetched from the database. So all my objects would basically be in one place: my collection.

    A cool thing I could do with this is avoid database calls to get data I already have (even if I updated it after retrieval, it's still up to date, assuming no other process updated it, but that's a different concern). I don't want to be calling new Customer("James Thomas") again if I already initialized James Thomas sometime in the past. Currently I end up with multiple copies of the same object across the appdomain, some out of sync and others in sync, and even though I deal with this using a timestamp field on the MSSQL server, I'd like to keep only one copy per customer in my appdomain (per process would be even better).

    I can't use regular collections like List or ArrayList, for example, because I cannot pass parameters by their real local reference to the existing Add() methods using ref, so that's not too good, I think. So how can this be implemented, if it can be implemented at all? A 'linked list' type of class with all methods working with ref and out params is what I'm thinking now, but it may get ugly pretty quickly. Is there another way to implement such a collection, like RefList<T>.Add(ref T obj)?

    So the bottom line is: I don't want to re-create an object if I've already created it before during the entire application life, unless I decide to re-create it explicitly (maybe it's out of date, so I have to fetch it again from the db). Are there alternatives?
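
    What's being described is essentially an identity map: one canonical instance per key, with all lookups going through a central registry. Since objects are reference types in C#, a plain Dictionary already shares the single instance; no ref parameters are needed to keep every caller pointing at the same object. A minimal sketch (names are illustrative, not from the original post):

        using System;
        using System.Collections.Generic;

        public static class IdentityMap<T> where T : class
        {
            private static readonly Dictionary<string, T> cache =
                new Dictionary<string, T>();

            // Returns the one existing instance for this key, invoking the
            // factory only on first request.
            public static T GetOrCreate(string key, Func<T> factory)
            {
                T item;
                if (!cache.TryGetValue(key, out item))
                {
                    item = factory();
                    cache[key] = item;
                }
                return item;
            }
        }

        // Usage: always the same Customer instance for the same key.
        var james = IdentityMap<Customer>.GetOrCreate("James Thomas",
                        () => new Customer("James Thomas"));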

  • How to output KML from GAE

    - by Niklas R
    Hi. I use KML for a Google map where entities have a db.GeoPt coordinate, and the soft memory limit was exceeded with 213.465 MB after servicing 1 request in total. The log says:

        /list.kml 200 13130ms 10211cpu_ms 4238api_cpu_ms

    The file list.kml, whose output is about 455.7 KB, is a template as follows:

        <?xml version="1.0" encoding="UTF-8"?>
        <kml xmlns="http://www.opengis.net/kml/2.2"
             xmlns:gx="http://www.google.com/kml/ext/2.2"
             xmlns:kml="http://www.opengis.net/kml/2.2"
             xmlns:atom="http://www.w3.org/2005/Atom">
        <Document>{% for a in list %}
          <Placemark>
            <name> </name>
            <description>
              <![CDATA[<a href="http://{{host}}/{{a.key.id}}">{{ a.title }}</a>
              <br/>{{a.text}}]]>
            </description>
            <Style>
              <IconStyle>
                <Icon>
                  <href>http://www.google.com/intl/en_us/mapfiles/ms/icons/green-dot.png</href>
                </Icon>
              </IconStyle>
            </Style>
            <Point>
              <coordinates>{{a.geopt.lon|floatformat:2}},{{a.geopt.lat|floatformat:2}}</coordinates>
            </Point>
          </Placemark>
        {% endfor %}
        </Document>
        </kml>

    Is there a memory leak in the template, or in the Python that passes the list variable? Can I improve things by using another template engine or another framework than the default? Is KMZ compression a good idea in this case? Thanks in advance for any suggestion about where or how to change the code.
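
    One thing that might help, sketched below under the assumption that the handler currently fetches the whole result set into a list before rendering: passing the query object itself to the template lets the datastore iterate in batches instead of holding every entity in memory at once. The handler code here is hypothetical, not from the original post:

        entries = Entry.all()   # a Query; iterating it fetches in batches
        self.response.headers['Content-Type'] = 'application/vnd.google-earth.kml+xml'
        self.response.out.write(template.render(template_path,
                                                {'list': entries, 'host': host}))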

  • Eager loading OneToMany in Hibernate with JPA2

    - by pihentagy
    I have a simple @OneToMany between Person and Pet entities:

        @OneToMany(mappedBy="owner", cascade=CascadeType.ALL, fetch=FetchType.EAGER)
        public Set<Pet> getPets() {
            return pets;
        }

    I would like to load all Persons with their associated Pets. So I came up with this (inside a test class):

        @RunWith(SpringJUnit4ClassRunner.class)
        @ContextConfiguration
        public class AppTest {
            @Test
            @Rollback(false)
            @Transactional(readOnly = false)
            public void testApp() {
                CriteriaBuilder qb = em.getCriteriaBuilder();
                CriteriaQuery<Person> c = qb.createQuery(Person.class);
                Root<Person> p1 = c.from(Person.class);
                SetJoin<Person, Pet> join = p1.join(Person_.pets);
                TypedQuery<Person> q = em.createQuery(c);
                List<Person> persons = q.getResultList();
                for (Person p : persons) {
                    System.out.println(p.getName());
                    for (Pet pet : p.getPets()) {
                        System.out.println("\t" + pet.getNick());
                    }
                }
            }
        }

    However, turning on the SQL logging shows that it executes 3 queries (having 2 Persons in the DB):

        Hibernate: select person0_.id as id0_, person0_.name as name0_, person0_.sex as sex0_
            from Person person0_ inner join Pet pets1_ on person0_.id=pets1_.owner_id
        Hibernate: select pets0_.owner_id as owner3_0_1_, pets0_.id as id1_, pets0_.id as id1_0_,
            pets0_.nick as nick1_0_, pets0_.owner_id as owner3_1_0_ from Pet pets0_
            where pets0_.owner_id=?
        Hibernate: select pets0_.owner_id as owner3_0_1_, pets0_.id as id1_, pets0_.id as id1_0_,
            pets0_.nick as nick1_0_, pets0_.owner_id as owner3_1_0_ from Pet pets0_
            where pets0_.owner_id=?

    Any tips? Thanks, Gergo
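
    For reference, one way this is commonly addressed (a sketch, untested against this exact mapping): a plain join only constrains the query, it does not populate the collection, so Hibernate still loads each Person's pets with its own SELECT. Asking the criteria query to fetch the association pulls the pets in via the same SQL join:

        Root<Person> p1 = c.from(Person.class);
        p1.fetch(Person_.pets);        // fetch join instead of plain join
        c.select(p1).distinct(true);   // de-duplicate the joined root rows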

  • Is this the right way to organize my database tables?

    - by Moss
    So I'm making a website that allows users to build contact lists. So there are users, the users have lists, and the lists have contacts. It seems to me that I need 3 tables for this, but I just want to make sure. There would be a User table, of course, and then a "list of lists" table that has the username and listname as primary key, along with whatever other info we want to attach to the lists as a whole. Finally, for lack of a better word, the List table, which would again have the username/listname p.k., then the contact ID and the notes and such that the user attaches to that contact on that specific list. I hope that is a clear explanation.

    For some reason I feel unsure about this arrangement. For one thing, if the website becomes popular, the List table could swell to billions of rows. And it also feels a little weird that everybody's list info is all jumbled up in the same table. I suppose I could create separate tables for each user, and even for each list, but that seems like a bad idea for other reasons.

    My db explanation assumes I can use foreign keys on my tables, which at the moment isn't actually an option. If I can't get InnoDB tables enabled, I will probably use IDs for the lists instead of depending on a compound key. Maybe I should do this anyway?
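
    For concreteness, a sketch of the three-table layout described above (MySQL-flavored; names are illustrative, and the surrogate list_id reflects the "use IDs for the lists" option from the last paragraph):

        CREATE TABLE users (
            username   VARCHAR(50) PRIMARY KEY
        );

        CREATE TABLE lists (                     -- the "list of lists"
            list_id    INT AUTO_INCREMENT PRIMARY KEY,
            username   VARCHAR(50) NOT NULL,     -- FK to users.username
            listname   VARCHAR(100) NOT NULL,
            UNIQUE (username, listname)
        );

        CREATE TABLE list_contacts (             -- one row per contact per list
            list_id    INT NOT NULL,             -- FK to lists.list_id
            contact_id INT NOT NULL,
            notes      TEXT,
            PRIMARY KEY (list_id, contact_id)
        );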

  • Creating search functionality with Laravel 4

    - by Mitch Glenn
    I am trying to create a way for users to search through all the products on a website. When they search for "burton snowboards", I only want the snowboards with the brand Burton to appear in the results. But if they searched only "burton", then all products with the brand Burton should appear. This is what I have attempted to write, but it isn't working, for multiple reasons.

    Controller:

        public function search(){
            $input = Input::all();
            $v = Validator::make($input, Product::$rules);

            if($v->passes())
            {
                $searchTerms = explode(' ', $input);
                $searchTermBits = array();
                foreach ($searchTerms as $term) {
                    $term = trim($term);
                    if (!empty($term)){
                        $searchTermBits[] = "search LIKE '%$term%'";
                    }
                }
                $result = DB::table('products')
                    ->select('*')
                    ->whereRaw(". implode(' AND ', $searchTermBits) . ")
                    ->get();
                return View::make('layouts/search', compact('result'));
            }
            return Redirect::route('/');
        }

    I am trying to recreate the first solution given for this stackoverflow.com problem. The first problem I have identified is that I'm trying to explode $input, but it's already an array, and I'm not sure how to go about fixing that. And the way I have written the ->whereRaw(". implode(' AND ', $searchTermBits) . ") line, I'm sure, isn't correct. I'm not sure how to fix these problems though; any insights or solutions will be greatly appreciated.
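
    A sketch of one way both issues could be fixed together (assuming the search string arrives in an input field named 'q' and that products.search is the searchable column; neither name is stated in the post): explode the single input value rather than the whole input array, and let the query builder chain one LIKE clause per term so the values are bound instead of interpolated into raw SQL.

        public function search() {
            $terms = array_filter(explode(' ', Input::get('q', '')));
            $query = DB::table('products');
            foreach ($terms as $term) {
                // Chained where() calls are ANDed together:
                $query->where('search', 'LIKE', '%'.trim($term).'%');
            }
            $result = $query->get();
            return View::make('layouts/search', compact('result'));
        }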

  • Why do my CouchDB databases grow so fast?

    - by konrad
    I was wondering why my CouchDB database was growing so fast, so I wrote a little test script. This script changes an attribute of a CouchDB document 1200 times and takes the size of the database after each change. After performing these 1200 writing steps, the database does a compaction step and the db size is measured again. In the end the script plots the database size against the revision numbers.

    The benchmarking is run twice: the first time, the default number of document revisions (=1000) is used (_revs_limit); the second time, the number of document revisions is set to 1. The first run produces the following plot:

        [plot: database size vs. number of revisions, first run]

    The second run produces this plot:

        [plot: database size vs. number of revisions, second run]

    For me this is quite unexpected behavior. In the first run I would have expected linear growth, as every change produces a new revision. When the 1000 revisions are reached, the size value should be constant, as the older revisions are discarded. After the compaction, the size should fall significantly. In the second run, the first revision should result in a certain database size that is then kept during the following writing steps, as every new revision leads to the deletion of the previous one.

    I could understand if there were a little bit of overhead needed to manage the changes, but this growth behavior seems weird to me. Can anybody explain this phenomenon, or correct my assumptions that lead to the wrong expectations?

  • ASP.NET DropDownList posting ""

    - by Daniel
    I am using ASP.NET Web Forms version 3.5 in VB. I have a DropDownList that is filled with data from a DB with a list of countries. The markup for the dropdown list is:

        <label class="ob_label">
            <asp:DropDownList ID="lstCountry" runat="server" CssClass="ob_forminput">
            </asp:DropDownList>
            Country*
        </label>

    And the code that fills the list is:

        Dim selectSQL As String = "exec dbo.*******************"

        ' Define the ADO.NET objects.
        Dim con As New SqlConnection(connectionString)
        Dim cmd As New SqlCommand(selectSQL, con)
        Dim reader As SqlDataReader

        ' Try to open the database and read information.
        Try
            con.Open()
            reader = cmd.ExecuteReader()

            ' For each item, add the country name to the displayed
            ' list box text, and store the unique ID in the Value property.
            Do While reader.Read()
                Dim newItem As New ListItem()
                newItem.Text = reader("AllSites_Countries_Name")
                newItem.Value = reader("AllSites_Countries_Id")
                CType(LoginViewCart.FindControl("lstCountry"), DropDownList).Items.Add(newItem)
            Loop
            reader.Close()
            CType(LoginViewCart.FindControl("lstCountry"), DropDownList).SelectedValue = 182
        Catch Err As Exception
            MailSender.SendMailMessage("*********************", "", "", _
                OrangeBoxSiteId.SiteName & " Error Catcher", _
                "<p>Error in sub FillCountry</p><p>Error on page:" & _
                HttpContext.Current.Request.Url.AbsoluteUri & _
                "</p><p>Error details: " & Err.Message & "</p>")
            Response.Redirect("~/error-on-page/")
        Finally
            con.Close()
        End Try

    When the form is submitted, an error occurs which says that the string "" cannot be converted to the data type Integer. For some reason the DropDownList is posting "" rather than the value for the selected country.
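
    One classic cause of exactly this symptom, sketched here as an assumption since the post doesn't show Page_Load: if the fill routine runs on every request, the list is rebound during the postback and the posted selection is discarded before it can be read. Binding only on the first load lets ViewState repopulate the items and preserve SelectedValue:

        Protected Sub Page_Load(ByVal sender As Object, ByVal e As EventArgs) Handles Me.Load
            If Not IsPostBack Then
                FillCountry()   ' bind the country list once, not on every postback
            End If
        End Sub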

  • Undefined method `add' on a cucumber step that usually works.

    - by Josiah Kiehl
    I have a path defined:

        when /the admin home\s?page/
          "/admin/"

    I have a scenario that is passing:

        Scenario: Let admins see the admin homepage
          Given "pojo" is logged in
          And "pojo" is an "admin"
          And I am on the admin home page
          Then I should see "Hi there."

    And I have a scenario that is failing:

        Scenario: Review flagged photo
          Given "pojo" is logged in
          And "pojo" is an "admin"
          ...bunch of steps that create stuff in the database...
          And I am on the admin home page
          Then ... the rest of the steps

    The step that fails in the second one is "And I am on the admin home page", which passes just fine in the first scenario. Here's the error I get:

        And I am on the admin home page    # features/step_definitions/web_steps.rb:18
          undefined method `add' for {}:Hash (NoMethodError)
          ./app/controllers/admin_controller.rb:13:in `index'
          ./app/controllers/admin_controller.rb:11:in `each'
          ./app/controllers/admin_controller.rb:11:in `index'
          /usr/lib/ruby/1.8/benchmark.rb:308:in `realtime'
          ./features/step_definitions/web_steps.rb:19:in `/^(?:|I )am on (.+)$/'
          features/admin.feature:52:in `And I am on the admin home page'

    This is very odd... why would it be fine in the first case, and not in the second, where the only difference is a bunch of steps that create records in the db?

    [edit] Here's the "add stuff to the database" step:

        Given /^there is a "([^\"]*)" with the following:$/ do |model, table|
          model.constantize.create!(table.rows_hash)
        end

  • Problem configuring JBoss to work with JNDI

    - by Spiderman
    I am trying to bind a connection to the DB using JNDI in my application that runs on JBoss. I did the following. I created the datasource file oracle-ds.xml and filled it with the relevant xml elements:

        <datasources>
            <local-tx-datasource>
                <jndi-name>bilby</jndi-name>
                ...
            </local-tx-datasource>
        </datasources>

    and put it in the folder \server\default\deploy. I added the relevant Oracle jar file. Then in my application I performed:

        JndiObjectFactoryBean factory = new JndiObjectFactoryBean();
        factory.setJndiName("bilby");
        try {
            factory.afterPropertiesSet();
            dataSource = factory.getObject();
        } catch (NamingException ne) {
            ne.printStackTrace();
        }

    And this causes the error:

        javax.naming.NameNotFoundException: bilby not bound

    Then in the output, after this error occurred, I saw the line:

        18:37:56,560 INFO [ConnectionFactoryBindingService] Bound ConnectionManager
        'jboss.jca:service=DataSourceBinding,name=bilby' to JNDI name 'java:bilby'

    So what is my configuration problem? I think it may be that JBoss first loads and runs the .war file of my application and only then loads the oracle-ds.xml that contains my data-source definition. The problem is that they are both located in the same folder. Is there a way to define the priority of loading them, or maybe this is not the problem at all. Any idea?
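
    Worth noting against the log line above: the datasource was bound as 'java:bilby', i.e. under the java: namespace that local-tx datasources get by default, while the lookup asks for plain "bilby". A sketch of the matching lookup:

        // Look the pool up under the namespace JBoss actually bound it to:
        factory.setJndiName("java:bilby");

    If the deployment-order theory still needs ruling out, declaring the web app's dependency on the datasource (e.g. a <depends> on jboss.jca:service=DataSourceBinding,name=bilby in jboss-web.xml) is the usual lever, but the name mismatch alone would explain the NameNotFoundException.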

  • Dynamically created controls and the ASP.NET page lifecycle

    - by Dirk
    I'm working on an ASP.NET project in which the vast majority of the forms are generated dynamically at run time (form definitions are stored in a DB for customizability). Therefore, I have to dynamically create and add my controls to the Page every time OnLoad fires, regardless of IsPostBack. This has been working just fine, and .NET takes care of managing ViewState for these controls.

        protected override void OnLoad(EventArgs e)
        {
            base.OnLoad(e);
            RenderDynamicControls();
        }

        private void RenderDynamicControls()
        {
            //1. call service layer to retrieve form definition
            //2. create and add controls to page container
        }

    I have a new requirement in which, if a user clicks on a given button (this button is created at design time), the page should be re-rendered in a slightly different way. So in addition to the code that executes in OnLoad (i.e. RenderDynamicControls()), I have this code:

        protected void MyButton_Click(object sender, EventArgs e)
        {
            RenderDynamicControlsALittleDifferently();
        }

        private void RenderDynamicControlsALittleDifferently()
        {
            //1. clear all controls from the page container added in RenderDynamicControls()
            //2. call service layer to retrieve form definition
            //3. create and add controls to page container
        }

    My question is, is this really the only way to accomplish what I'm after? It seems beyond hacky to effectively render the form twice simply to respond to a button click. I gather from my research that this is simply how the page lifecycle works in ASP.NET: namely, that OnLoad must fire on every postback before child events are invoked. Still, it's worthwhile to check with the SO community before having to drink the kool-aid.

    On a related note, once I get this feature completed, I'm planning on throwing an UpdatePanel on the page to perform the page updates via Ajax. Any code/advice that makes that transition easier would be much appreciated. Thanks.

  • Best practices for Magento deployment

    - by Spongeboy
    I am looking at setting up a deployment process for a highly customised Magento site, and was wondering how other people do this. I will be setting up dev, UAT and prod environments. All the Magento files will be in source control (SVN). At this stage, I can't see any requirements for changing the DB, so the 3 databases will be manually maintained. Specifically:

    - How do you apply Magento upgrades? (Individually in each env, or on dev then roll out, or just give up on upgrades?)
    - What files/folders do you leave alone in each environment (e.g. magento/app/etc/local.xml)?
    - Do you restrict developers to editing specific files/folders?
    - Do you restrict theme designers to editing specific files/folders?
    - How do you manage database changes?

    Theme designer files/folders. Designers can be restricted to editing the following folders:

        app/design/frontend/your_interface/your_theme/layout/
        app/design/frontend/your_interface/your_theme/template/
        app/design/frontend/your_interface/your_theme/locale/
        skin/frontend/your_interface/your_theme/

    Extension developer files/folders. Extension developers can edit the following folders/files:

        /app/code/local
        /app/etc/modules/<Namespace>_<Module>.xml

    Database environment management. As the store's base URL is stored in the database, you cannot just copy databases between environments. Options include:

    - Overriding the base url in php (blog article on setting up dev and staging databases).
    - Changing the base url in the database after copying. (Where is this stored?)
    - Doing a MySQLDump or backup, then doing a replace on the URL in the SQL file.
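
    On the "where is this stored?" question: in Magento the base URLs live in the core_config_data table, so a sketch of the change-after-copy option looks like this (the hostname is illustrative):

        UPDATE core_config_data
        SET    value = 'http://staging.example.com/'
        WHERE  path IN ('web/unsecure/base_url', 'web/secure/base_url');

    Clearing var/cache after the update is generally needed so the old URLs aren't served from the config cache.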

  • How to find the last instance of a setting in a config file

    - by Glenn Kelley
    I am trying to figure out how to find the last entry of a string in multiple config files across a server. Each of the strings will be in the /home/***usernamewouldbehere/public_html/typo3conf/localconf.php file.

    In short, the last entry in each config file points to the database server the application is utilizing, and we need to know which accounts point to which db server. While I can run something like:

        grep "$_db_host" /home/*/public_html/conf/localconf.php

    it does not really help much, because it gives us way too much information, and not what we really need. What I really need to know is the last entry of this string:

        $_db_host = 'xx';

    and to sort them out into an export file. Since the config files may have multiple entries, for example:

        $_db_host = 'localhost';
        $_db_host = '10.0.1.234';

    it would be great to list in a file all of those that have the entry 'localhost', and then all of those that have the entry '10.0.1.234' (or whichever server is there), but even if I need to do that manually, that would be great. I am not sure how to get to it using awk, and am really stuck.

    What I am hoping for is something piped out as follows:

        db_host = localhost     /home/username1/www/conf/localconf.php
        db_host = localhost     /home/username2/public_html/conf/localconf.php
        db_host = '10.1.2.23'   /home/username55/public_html/conf/localconf.php

    Hoping that helps you help me :-)
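
    A sketch of one way to do this with awk (assuming the typo3conf path from the first paragraph; adjust the glob to match the real layout): remember the last matching line per file, print it alongside the file name, then sort so identical hosts group together.

        for f in /home/*/public_html/typo3conf/localconf.php; do
            awk -v file="$f" '/\$_db_host/ { last = $0 }
                 END { if (last) print last "\t" file }' "$f"
        done | sort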

  • python mongokit Connection() AssertionError

    - by zalew
    Just installed MongoKit and can't figure out why I get an AssertionError. Python console:

        >>> from mongokit import Connection
        >>> c = Connection()
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "/usr/local/lib/python2.6/dist-packages/mongokit-0.5.3-py2.6.egg/mongokit/connection.py", line 35, in __init__
            super(Connection, self).__init__(*args, **kwargs)
          File "build/bdist.linux-i686/egg/pymongo/connection.py", line 169, in __init__
          File "build/bdist.linux-i686/egg/pymongo/connection.py", line 338, in __find_master
          File "build/bdist.linux-i686/egg/pymongo/connection.py", line 226, in __master
          File "build/bdist.linux-i686/egg/pymongo/database.py", line 220, in command
          File "build/bdist.linux-i686/egg/pymongo/collection.py", line 356, in find_one
          File "build/bdist.linux-i686/egg/pymongo/cursor.py", line 485, in next
          File "build/bdist.linux-i686/egg/pymongo/cursor.py", line 461, in _refresh
          File "build/bdist.linux-i686/egg/pymongo/cursor.py", line 429, in __send_message
          File "build/bdist.linux-i686/egg/pymongo/helpers.py", line 98, in _unpack_response
        AssertionError
        >>>

    MongoDB console:

        Wed Mar 31 10:27:34 connection accepted from 127.0.0.1:60480 #30
        Wed Mar 31 10:27:34 end connection 127.0.0.1:60480

    Versions: db 1.5, pymongo 1.5 (tested also on 1.4), mongokit 0.5.3 (also 0.5.2).

  • Is it possible to execute a function in Mongo that accepts any parameters?

    - by joshua.clayton
    I'm looking to write a function to do a custom query on a collection in Mongo. The problem is, I want to reuse that function. My thought was this (obviously contrived):

        var awesome = function(count) {
            return function() {
                return this.size == parseInt(count);
            };
        }

    So then I could do something along the lines of:

        db.collection.find(awesome(5));

    However, I get this error:

        error: {
            "$err" : "error on invocation of $where function:
            JS Error: ReferenceError: count is not defined nofile_b:1"
        }

    So it looks like Mongo isn't honoring scope, but I'm really not sure why. Any insight would be appreciated.

    To go into more depth about what I'd like to do: a collection of documents has lat/lng values, and I want to find all documents within a concave or convex polygon. I have the function written, but would ideally like to be able to reuse it, so I want to pass an array of points composing my polygon to the function I execute on Mongo's end. I've looked at Mongo's geospatial querying, and it currently only supports circle and box queries; I need something more complex.
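
    For context on why the closure fails, plus a sketch of a common workaround: a $where function is serialized to source text, shipped to the server, and re-evaluated there, so the closed-over count never makes the trip. Baking the value into a $where string does work:

        var awesome = function(count) {
            // Returns a $where *string* with the value already substituted.
            return "this.size == " + parseInt(count, 10);
        };

        db.collection.find({$where: awesome(5)});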
