Search Results

Search found 10042 results on 402 pages for 'orient db'.

  • Node & Redis: Crucial Design Issues in Production Mode

    - by Ali
    This question is a hybrid one, being both technical and system-design related. I'm developing the backend of an application that will handle approx. 4K requests per second. We are using Node.js for its speed, and in terms of our database structure we are using MongoDB, with Redis as a layer between Node and MongoDB handling volatile operations. I'm quite stressed because we are expecting concurrent requests that we need to handle carefully, and we are quite close to launch. However, I do not believe I've applied the correct approach with Redis. I have a class Student, and students constantly change stages (such as 'active', 'doing homework', 'in lesson', etc.), so I created a Redis DB for each state (1 for 'active', 2 for 'doing homework'). Below is the structure of the 'active' students table:

        xa5p - JSON stringified object #1
        pQrW - JSON stringified object #2
        active_student_table - {{studentId:'xa5p'}, {studentId:'pQrW'}}

    Since there is no 'select all keys' method in Redis, I've been advised to use a set, so that when I run 'smembers' I receive the keys and can later do a 'get' for each id in order to find a specific user (say, age older than 15). I've also been told never to use the KEYS command in production. My question is, no matter how 'conceptual' it is: what specific things should I avoid doing with Node and Redis in production? Are there any issues with my design? Students must be objects, and I could keep them in a list, but I haven't done that yet. Is that crucial for production?
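
    A minimal sketch of one common alternative, assuming the node_redis client: keep each student in a hash and keep one set per state holding the ids, so the members of a state can be fetched without ever calling KEYS (the key names and fields below are hypothetical, not from the question):

    ```js
    var redis = require('redis');
    var client = redis.createClient();

    // Store each student as a hash and index its id in a per-state set.
    function saveStudent(student, state, cb) {
      client.hmset('student:' + student.id, { name: student.name, age: String(student.age) }, function (err) {
        if (err) return cb(err);
        client.sadd('state:' + state, student.id, cb);
      });
    }

    // Fetch every student currently in a given state.
    function studentsInState(state, cb) {
      client.smembers('state:' + state, function (err, ids) {
        if (err) return cb(err);
        var multi = client.multi();
        ids.forEach(function (id) { multi.hgetall('student:' + id); });
        multi.exec(cb); // cb(err, arrayOfStudentHashes)
      });
    }
    ```

    Filtering (e.g. age older than 15) would still happen in Node after the fetch; Redis itself has no secondary queries over hash fields.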

  • Advice needed: cold backup for SQL Server 2008 Express?

    - by Mikey Cee
    What are my options for achieving a cold backup server for a SQL Server 2008 Express instance running a single database? I have a SQL Server 2008 Express instance in production that currently represents a single point of failure for my application. I have a second physical box sitting at the installation that is currently doing nothing. I want to somehow replicate my database in near real time (a little bit of data loss is acceptable) to the second box. The database is very small and resources are utilized very lightly. In the case that the production server dies, I would manually reconfigure my application to point to the backup server instead. Although Express doesn't support log shipping, I am thinking that I could manually script a poor man's version of it, where I use batch files to take the logs, copy them across the network and apply them to the second server at 5 minute intervals. Does anyone have any advice on whether this is technically achievable, or if there is a better way to do what I am trying to do? Note that I want to avoid having to pay for the full version of SQL Server and configure mirroring, as I think that is overkill for this application. I understand that other DB platforms may present suitable options (e.g. a MySQL cluster), but for the purposes of this discussion, let's assume we have to stick to SQL Server.
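
    A rough sketch of the poor man's log shipping described above, assuming sqlcmd and a share on the standby box (database name, paths and server names are hypothetical). It presumes the database is in the FULL recovery model and the standby was seeded from a full backup restored WITH NORECOVERY:

    ```bat
    rem -- primary box, scheduled every 5 minutes: back up the log and push it across
    set STAMP=%DATE:/=-%_%TIME::=-%
    set STAMP=%STAMP: =0%
    sqlcmd -S .\SQLEXPRESS -b -Q "BACKUP LOG [AppDb] TO DISK = N'C:\LogShip\AppDb_%STAMP%.trn'"
    robocopy C:\LogShip \\standby\LogShip *.trn /MOV

    rem -- standby box, scheduled shortly after: apply each new log, stay in restoring state
    for %%f in (C:\LogShip\*.trn) do (
      sqlcmd -S .\SQLEXPRESS -b -Q "RESTORE LOG [AppDb] FROM DISK = N'%%f' WITH NORECOVERY" && del "%%f"
    )
    ```

    On failover the standby would be brought online once with RESTORE DATABASE [AppDb] WITH RECOVERY.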

  • Django: Paginator + raw SQL query

    - by Silver Light
    Hello! I'm using the Django Paginator everywhere on my website and even wrote a special template tag to make it more convenient. But now I've reached a point where I need to make a complex custom raw SQL query that, without a LIMIT, will return about 100K records. How can I use the Django Paginator with a custom query? A simplified example of my problem. My model:

        class PersonManager(models.Manager):
            def complicated_list(self):
                from django.db import connection
                cursor = connection.cursor()
                # the real query is much more complex
                cursor.execute("""SELECT * FROM `myapp_person`""")
                result_list = []
                for row in cursor.fetchall():
                    result_list.append(row[0])
                return result_list

        class Person(models.Model):
            name = models.CharField(max_length=255)
            surname = models.CharField(max_length=255)
            age = models.IntegerField()
            objects = PersonManager()

    The way I use pagination with the Django ORM:

        all_objects = Person.objects.all()
        paginator = Paginator(all_objects, 10)
        try:
            page = int(request.GET.get('page', '1'))
        except ValueError:
            page = 1
        try:
            persons = paginator.page(page)
        except (EmptyPage, InvalidPage):
            persons = paginator.page(paginator.num_pages)

    This way Django gets very smart and adds a LIMIT to the query when executing it. But when I use the custom manager:

        all_objects = Person.objects.complicated_list()

    all data is selected, and only then is the Python list sliced, which is VERY slow. How can I make my custom manager behave like the built-in one?
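
    A hedged sketch of one way to do this: give the Paginator an object that knows how to count itself and how to turn slicing into LIMIT/OFFSET, so only one page is ever fetched (the wrapper class below is hypothetical, not part of Django):

    ```python
    from django.db import connection

    class RawQueryWrapper(object):
        """Expose the count()/slice interface Paginator expects over a raw SQL query."""
        def __init__(self, sql, params=()):
            self.sql = sql
            self.params = params

        def count(self):
            cursor = connection.cursor()
            cursor.execute("SELECT COUNT(*) FROM (%s) AS sub" % self.sql, self.params)
            return cursor.fetchone()[0]

        def __len__(self):
            return self.count()

        def __getitem__(self, piece):
            # Paginator asks for object_list[bottom:top]; translate that into LIMIT/OFFSET.
            offset = piece.start or 0
            limit = piece.stop - offset
            cursor = connection.cursor()
            cursor.execute("%s LIMIT %d OFFSET %d" % (self.sql, limit, offset), self.params)
            return cursor.fetchall()

    # usage:
    # paginator = Paginator(RawQueryWrapper("SELECT * FROM `myapp_person`"), 10)
    ```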

  • ASP.NET: How to repeat HTML parts with minor differences on a page?

    - by tinky05
    It's a really simple problem. I've got HTML code like this:

        <div>
            <img src="image1.jpg" alt="test1" />
        </div>
        <div>
            <img src="image2.jpg" alt="test2" />
        </div>
        <div>
            <img src="image3.jpg" alt="test3" />
        </div>

    etc. The data is coming from a DB (image name, alt text). In Java I would save the info in an array in the back end, and for the presentation loop through it with JSTL:

        <c:forEach items="${data}" var="item">
            <div>
                <img src="${item.image}" alt="${item.alt}" />
            </div>
        </c:forEach>

    What's the best practice in ASP.NET? I just don't want to create a string with HTML code in it in the code-behind; that's ugly IMO.
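
    For reference, a minimal sketch of the usual Web Forms answer, an asp:Repeater bound in the code-behind (the property names and the data-access call are assumptions, not from the question):

    ```aspx
    <asp:Repeater ID="ImageRepeater" runat="server">
        <ItemTemplate>
            <div>
                <img src='<%# Eval("ImageName") %>' alt='<%# Eval("AltText") %>' />
            </div>
        </ItemTemplate>
    </asp:Repeater>
    ```

    ```csharp
    // code-behind
    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            ImageRepeater.DataSource = GetImagesFromDb(); // hypothetical data-access call
            ImageRepeater.DataBind();
        }
    }
    ```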

  • EF 4.0 code-only association from abstract to derived

    - by Jeroen
    Using EF 4.0 code-only I want to make an association between an abstract class and a normal class. I have the classes 'Item', 'ContentBase' and 'Test'. 'ContentBase' is abstract and 'Test' derives from it. 'ContentBase' has a property 'Item' that links to an instance of 'Item', so that 'Test.Item' (or any class that derives from 'ContentBase') has an 'Item' navigation property. In my DB every record for Test has a matching record for Item.

        public class Item
        {
            public int Id { get; set; }
        }

        public abstract class ContentBase
        {
            public int ContentId { get; set; }
            public int Id { get; set; }
            public Item Item { get; set; }
        }

        public class Test : ContentBase
        {
            public string Name { get; set; }
        }

    Now some init code:

        public void SomeInitFunction()
        {
            var itemConfig = new EntityConfiguration<Item>();
            itemConfig.HasKey(p => p.Id);
            itemConfig.Property(p => p.Id).IsIdentity();
            this.ContextBuilder.Configurations.Add(itemConfig);

            var testConfig = new EntityConfiguration<Test>();
            testConfig.HasKey(p => p.ContentId);
            testConfig.Property(p => p.ContentId).IsIdentity();
            // the problem
            testConfig.Relationship(p => p.Item).HasConstraint((p, q) => p.Id == q.Id);
            this.ContextBuilder.Configurations.Add(testConfig);
        }

    This gives an error: "A key is registered for the derived type 'Test'. Keys must be registered for the root type 'ContentBase'." Whatever I try I get an error. What am I doing wrong?
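
    A hedged sketch of what the error message points at: register the key (and the relationship) against the root type ContentBase instead of the derived Test, using the same CTP configuration API shown above (untested):

    ```csharp
    var contentConfig = new EntityConfiguration<ContentBase>();
    contentConfig.HasKey(p => p.ContentId);
    contentConfig.Property(p => p.ContentId).IsIdentity();
    // Configure the Item navigation on the root of the hierarchy so every
    // derived type (Test, ...) inherits the association.
    contentConfig.Relationship(p => p.Item).HasConstraint((p, q) => p.Id == q.Id);
    this.ContextBuilder.Configurations.Add(contentConfig);
    ```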

  • C# algorithm for figuring out different possible combinations...

    - by ttomsen
    I have 10 boxes, and each box can hold one item from a group/type of items; each 'group' type only fits in one of the 10 box types. The pool can have n items, and the groups contain completely distinct items. Each item has a price, and I want an algorithm that will generate all the different possibilities so I can figure out different price points. A smaller picture of the problem:

        BOX A - can have item 1, 2, 3, 4 in it
        BOX B - can have item 6, 7, 8, 9, 10, 11, 12
        BOX C - can have item 13, 15, 16, 20, 21

    The items are stored in a DB; they have a column which denotes which box they can go in. All box types are stored in an array, and I can put the items in a generic list. Does anyone see a straightforward way to do this? I have tried 10 nested foreach loops to see if I could find a simpler way, but the nested loops will take MANY hours to run. The nested foreaches basically pull all combinations, then calculate a rank for each combination, and store the top 10 ranked combinations of items for output.
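
    What is being enumerated is the cartesian product of the ten per-box item lists, so the number of combinations is the product of the list sizes regardless of how the loop is written; a hedged C# sketch of a generic cartesian product over any number of lists (the ranking step is left out):

    ```csharp
    using System.Collections.Generic;
    using System.Linq;

    static IEnumerable<IEnumerable<T>> CartesianProduct<T>(IEnumerable<IEnumerable<T>> sequences)
    {
        // Start with a single empty combination and extend it with every item of each sequence in turn.
        IEnumerable<IEnumerable<T>> result = new[] { Enumerable.Empty<T>() };
        foreach (var sequence in sequences)
        {
            var s = sequence; // avoid capturing the loop variable
            result = result.SelectMany(combo => s, (combo, item) => combo.Concat(new[] { item }));
        }
        return result;
    }

    // usage: foreach (var combo in CartesianProduct(boxes)) { /* rank combo, keep the top 10 */ }
    ```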

  • NHibernate - getting a single column from another table

    - by Muhammad Akhtar
    I have the following tables:

        Employee: ID, CompanyID, Name    // CompanyID is a foreign key to the Company table
        Company:  CompanyID, Name

    I want to map this to the following class:

        public class Employee
        {
            public virtual int ID { get; set; }
            public virtual int CompanyID { get; set; }
            public virtual string Name { get; set; }
            public virtual string CompanyName { get; set; }
            protected Employee() { }
        }

    Here is my XML mapping:

        <class name="Employee" table="Employee" lazy="true">
          <id name="Id" type="Int32" column="Id">
            <generator class="native" />
          </id>
          <property name="CompanyID" column="CompanyID" type="Int32" not-null="false"/>
          <property name="Name" column="Name" type="String" length="100" not-null="false"/>
        </class>

    What do I need to add to the mapping to get CompanyName in my result? Here is my code:

        public ArrayList getTest()
        {
            ISession session = NHibernateHelper.GetCurrentSession();
            string query = "select Employee.*, (Company.Name) CompanyName from Employee " +
                           "inner join Company on Employee.CompanyID = Company.CompanyID";
            ArrayList document = (ArrayList)session.CreateSQLQuery(query, "Employee", typeof(Document)).List();
            return document;
        }

    But in the returned result CompanyName is null, while the other columns are fine. Note: in the DB the tables have no physical relation. Please suggest a solution. Thanks.
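
    One way to map a column from another table without a physical relation is a formula property; a hedged sketch using the table and column names from the question (untested):

    ```xml
    <!-- computed by a correlated subquery instead of being read from a mapped column -->
    <property name="CompanyName" type="String"
              formula="(select c.Name from Company c where c.CompanyID = CompanyID)" />
    ```

    With that in the mapping, loading the entity through a normal session.Get or CreateCriteria call would populate CompanyName without the hand-written SQL.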

  • How to synchronize a database table and directory with PHP

    - by twmulloy
    Hello, I have a directory with files and a database table that should list the same files. I would like to be able to synchronize the database table with the directory. What would be the most efficient way to do this, or would I realistically only be able to do it in a brute-force manner? Here's my approach:

    1. Retrieve all of the files in the directory as an array.
    2. Retrieve all of the filenames in the database table as an array.
    3. Loop through the directory array and use in_array() on the database table array to verify each filename is there; if not, build an array of the missing filenames, then run a DB query to add each missing file row to the database table.
    4. Loop through the database table array and use in_array() on the directory array; anything not found in the directory will just be deleted from the table.

    Is there a better way to go about this, or something better for this in PHP than in_array()?
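
    A compact sketch of the same idea using array_diff() instead of per-item in_array() calls (the table and column names are hypothetical; assumes PDO):

    ```php
    <?php
    // filenames on disk (skip . and ..)
    $onDisk = array_diff(scandir('/path/to/files'), array('.', '..'));

    // filenames currently in the table
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
    $inDb = $pdo->query('SELECT filename FROM files')->fetchAll(PDO::FETCH_COLUMN);

    // rows to add: on disk but not in the table
    $insert = $pdo->prepare('INSERT INTO files (filename) VALUES (?)');
    foreach (array_diff($onDisk, $inDb) as $name) {
        $insert->execute(array($name));
    }

    // rows to remove: in the table but no longer on disk
    $delete = $pdo->prepare('DELETE FROM files WHERE filename = ?');
    foreach (array_diff($inDb, $onDisk) as $name) {
        $delete->execute(array($name));
    }
    ```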

  • If we make a number every millisecond, how much data would we have in a day?

    - by Roger Travis
    I'm a bit confused here... I'm being offered a project where there would be an array of sensors that would give off a reading every millisecond (yes, 1000 readings a second). Each reading would be a 3- or 4-digit number, for example 818 or 1529. These readings need to be stored in a database on a server and accessed remotely. I have never worked with such big amounts of data. What do you think: how much data, in terms of MB, would one sensor produce in a day? 4 (digits) x 1000 x 60 x 60 x 24 = 345,600,000 bits... right? About 42 MB per day... doesn't seem too bad, right? Therefore a DB of, say, 1 GB would hold 23 days of info from 1 sensor, correct? I understand that MySQL and PHP probably would not be able to handle it... what would you suggest, maybe some apps? Azure? Oracle? Thanks!
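
    For reference, a corrected back-of-the-envelope calculation (the 42 MB figure above treats the four digits as bits rather than bytes):

    ```text
    readings/day   = 1,000 per second x 86,400 s       = 86,400,000
    raw bytes/day  = 86,400,000 x 2 bytes (smallint)    ~ 172.8 MB
                     (~ 345.6 MB if stored as 4 text characters per reading)
    days in 1 GB   ~ 1,000 MB / 172.8 MB                ~ 5-6 days of raw values,
                     before any row, timestamp or index overhead
    ```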

  • Database localization

    - by Don
    Hi, I have a number of database tables that contain name and description columns which need to be localized. My initial attempt at designing a DB schema that would support this was something like:

        product       : id, name, description
        local_product : id, product_id, local_name, local_description, locale_id
        locale        : id, locale

    However, this solution requires a new local_ table for every table that contains name and description columns that require localization. In an attempt to avoid this overhead I redesigned the schema so that only a single localization table is needed:

        product      : id, localization_id
        localization : id, local_name, local_description, locale_id
        locale       : id, locale

    Here's an example of the data which would be stored in this schema when there are 2 tables (product and country) requiring localization:

        country                          product
        id, localization_id              id, localization_id
        -------------------              -------------------
        1, 5                             1, 2

        localization
        id, local_name, local_description,  locale_id
        ----------------------------------------------
        2,  apple,      a delicious fruit,    2
        2,  pomme,      un fruit délicieux,   3
        2,  apfel,      ein köstliches Obst,  4
        5,  ireland,    a small country,      2
        5,  irlande,    un petite pay,        3

        locale
        id, locale
        ----------
        2,  en
        3,  fr
        4,  de

    Notice that the compound primary key of the localization table is (id, locale_id), but the foreign key in the product table only refers to the first element of this compound PK. This seems like 'a bad thing' from the POV of normalization. Is there any way I can fix this problem, or alternatively, is there a completely different schema that supports localization without creating a separate table for each localizable table?

    Update: A number of respondents have proposed a solution that requires creating a separate table for each localizable table. However, this is precisely what I'm trying to avoid. The schema I've proposed above almost solves the problem to my satisfaction, but I'm unhappy about the fact that the localization_id foreign keys only refer to part of the corresponding primary key in the localization table. Thanks, Don
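
    One hedged way to keep a single shared table while letting the product FK reference a complete key is to add a parent "localization set" row that the translations hang off; a generic SQL sketch (the table names are hypothetical):

    ```sql
    CREATE TABLE localization_set (
        id INT PRIMARY KEY                     -- one row per translatable thing
    );

    CREATE TABLE localization (
        set_id            INT NOT NULL REFERENCES localization_set(id),
        locale_id         INT NOT NULL REFERENCES locale(id),
        local_name        VARCHAR(255),
        local_description VARCHAR(1000),
        PRIMARY KEY (set_id, locale_id)        -- one translation per locale per set
    );

    CREATE TABLE product (
        id              INT PRIMARY KEY,
        localization_id INT NOT NULL REFERENCES localization_set(id)
    );
    ```

    The product row then references a whole primary key, and the per-locale rows still live in one shared table.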

  • How many users are sufficient to make a heavy load for a web application

    - by galymzhan
    I have a web application which has been suffering high load in recent days. The application runs on a single server with an 8-core Intel CPU and 4 GB of RAM. Software: Drupal 5 (Apache 2, PHP 5, MySQL 5) running on Debian. After reaching 500 authenticated and 200 anonymous simultaneous users, the application drastically decreases its performance, up to total failure. The biggest load comes from authenticated users, who perform activities causing inserts/updates/deletes on the DB. I think MySQL is the bottleneck. Is it normal to slow down at such a number of users?

    EDIT: I forgot to mention that I did some kind of profiling. I ran the commands top and htop and they showed me that all memory was being used by MySQL! After some time MySQL starts to perform terribly slowly, the site goes down, and we have to restart/stop Apache to reduce load. Administrators said that there were about 200 active MySQL connections at that moment. The worst point is that we need to solve this ASAP, and I can't do deep profiling analysis/code refactoring, so I'm considering 2 ways: my tables are MyISAM; I heard they use table-level locking, which is very slow, is that right? Could I change them to InnoDB without worry? And what if I take MySQL and move it to a dedicated machine with a lot of RAM?
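
    For reference, the MyISAM-to-InnoDB conversion mentioned above is a per-table statement (take a backup first; the schema name below is hypothetical):

    ```sql
    ALTER TABLE node ENGINE=InnoDB;   -- repeat per table, or generate the statements:

    SELECT CONCAT('ALTER TABLE ', table_name, ' ENGINE=InnoDB;')
    FROM information_schema.tables
    WHERE table_schema = 'drupal_db' AND engine = 'MyISAM';
    ```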

  • Chinese records in an SQLite database are not accessible in an iPhone app

    - by Mas
    Hi! I'm creating an iPhone app in which I'm reading data from an SQLite DB and presenting it in a table view control. The problem is that the data is in Chinese. For some unknown reason, when I read/fetch records from the SQLite table, some records are presented and others are missed by Objective-C. For some of the missing rows Objective-C even reads some columns but returns nil for others, although in reality those columns are not empty. I have tried many solutions but haven't found a suitable one. The same database works perfectly in the Android version of the app. Can anyone help? Here is the code for reading the Chinese records from the table:

        Quote *q = [[Quote alloc] init];
        q.catId = sqlite3_column_int(statement, 0);
        q.subCatId = sqlite3_column_int(statement, 1);
        q.quote = [NSString stringWithUTF8String:(char *)sqlite3_column_text(statement, 2)];

    and here are some of the records the iPhone misses:

        ??????·???
        ??????,????????!
        ??????,???????

    Thanks
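
    A hedged sketch of a more defensive way to read that text column: stringWithUTF8String: must not be given a NULL pointer, so check the pointer and build the string from the raw bytes explicitly (memory management/ARC details omitted):

    ```objc
    const unsigned char *text = sqlite3_column_text(statement, 2);
    int length = sqlite3_column_bytes(statement, 2);   // call after sqlite3_column_text
    if (text != NULL && length > 0) {
        q.quote = [[NSString alloc] initWithBytes:text
                                           length:length
                                         encoding:NSUTF8StringEncoding];
    } else {
        q.quote = @"";
    }
    ```

    If initWithBytes:length:encoding: also returns nil for those rows, the bytes in the file are probably not valid UTF-8, which would point at how the database was generated rather than at the reading code.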

  • Namespace constraint with generic class declaration

    - by SomeGuy
    Good afternoon people, I would like to know if (and if so, how) it is possible to use a namespace as a constraint parameter in a generic class declaration. What I have is this:

        namespace MyProject.Models.Entities   <-- contains my classes to be persisted in the DB
        namespace MyProject.Tests.BaseTest    <-- obvious, I think

    Now the declaration of my 'BaseTest' class looks like so:

        public class BaseTest<T>

    This BaseTest does little more (at the time of writing) than remove all entities that were added to the database during testing. So typically I will have a test class declared as:

        public class MyEntityRepositoryTest : BaseTest<MyEntity>

    What I would LIKE to do is something similar to the following:

        public class BaseTest<T> where T : <is of the MyProject.Models.Entities namespace>

    Now I am aware that it would be entirely possible to simply declare a 'BaseEntity' class from which all entities created within the MyProject.Models.Entities namespace will inherit:

        public class BaseTest<T> where T : MyBaseEntity

    but... I don't actually need to, or want to. Plus I am using an ORM, and mapping entities with inheritance, although possible, adds a layer of complexity that is not required. So, is it possible to constrain a generic class parameter to a namespace and not a specific type? Thank you for your time.
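
    Generic constraints in C# can only name base classes and interfaces (plus class/struct/new()), not namespaces; if a guard is still wanted without a common base type, one hedged workaround is a runtime check in a static constructor (the namespace string is taken from the question):

    ```csharp
    public class BaseTest<T>
    {
        static BaseTest()
        {
            // Not a compile-time constraint: this only fails when the closed type is first used.
            if (typeof(T).Namespace != "MyProject.Models.Entities")
                throw new InvalidOperationException(
                    typeof(T).FullName + " is not an entity from MyProject.Models.Entities.");
        }
    }
    ```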

  • Two entities with @ManyToOne should join the same table

    - by Ivan Yatskevich
    I have the following entities:

        @Entity
        public class Student implements Serializable {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            private Long id;
            // getter and setter for id
        }

        @Entity
        public class Teacher implements Serializable {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            private Long id;
            // getter and setter for id
        }

        @Entity
        public class Task implements Serializable {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            private Long id;

            @ManyToOne(optional = false)
            @JoinTable(name = "student_task",
                       inverseJoinColumns = { @JoinColumn(name = "student_id") })
            private Student author;

            @ManyToOne(optional = false)
            @JoinTable(name = "student_task",
                       inverseJoinColumns = { @JoinColumn(name = "teacher_id") })
            private Teacher curator;

            // getters and setters
        }

    Consider that author and curator are already stored in the DB and both are in the attached state. I'm trying to persist my Task:

        Task task = new Task();
        task.setAuthor(author);
        task.setCurator(curator);
        entityManager.persist(task);

    Hibernate executes the following SQL:

        insert into student_task (teacher_id, id) values (?, ?)

    which, of course, leads to:

        null value in column "student_id" violates not-null constraint

    Can anyone explain this issue and possible ways to resolve it?
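
    One common way to stop the two associations from competing for the same student_task join table is to drop the join tables and keep plain foreign-key columns on the task table itself; a hedged sketch (the column names are assumptions):

    ```java
    @ManyToOne(optional = false)
    @JoinColumn(name = "author_id")    // FK column on the task table
    private Student author;

    @ManyToOne(optional = false)
    @JoinColumn(name = "curator_id")   // second FK column, no shared join table
    private Teacher curator;
    ```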

  • BlackBerry Asynchronous HTTP Requests - How?

    - by Kai
    The app I'm working on has a self-contained database. The only time I need an HTTP request is when the user first loads the app. I do this by calling a class that verifies whether or not a local DB exists and, if not, creates one with the following request:

        HttpRequest data = new HttpRequest("http://www.somedomain.com/xml", "GET", this);
        data.start();

    This XML returns a list of content, all of which have images that I want to fetch AFTER the original request is complete and stored. So something like this won't work:

        HttpRequest data = new HttpRequest("http://www.somedomain.com/xml", "GET", this);
        data.start();
        HttpRequest images = new HttpRequest("http://www.somedomain.com/xmlImages", "GET", this);
        images.start();

    since the second request will not wait for the first one to complete. I have not found much information on adding callbacks to HttpRequest, or any other method I could use to ensure operation 2 does not execute until operation 1 is complete. Any help would be appreciated. Thanks
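
    A heavily hedged sketch of one way to chain the two requests, assuming the third constructor argument of the (custom) HttpRequest class above is a listener/callback object; the RequestListener interface and storeContent() call are hypothetical:

    ```java
    public interface RequestListener {
        void onComplete(String response);
        void onError(Exception e);
    }

    // Start the first request; only once its response is parsed and stored,
    // fire the image request from inside the callback.
    new HttpRequest("http://www.somedomain.com/xml", "GET", new RequestListener() {
        public void onComplete(String response) {
            storeContent(response);   // hypothetical persistence step
            new HttpRequest("http://www.somedomain.com/xmlImages", "GET", imageListener).start();
        }
        public void onError(Exception e) {
            // report or retry
        }
    }).start();
    ```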

  • Locating SSL certificate, key and CA on server

    - by jovan
    Disclaimer: you don't need to know Node to answer this question, but it would help. I have a Node server and I need to make it work with HTTPS. As I researched around the internet, I found that I have to do something like this:

        var fs = require('fs');
        var credentials = {
            key: fs.readFileSync('path/to/ssl/private-key'),
            cert: fs.readFileSync('path/to/ssl/cert'),
            ca: fs.readFileSync('path/to/something/called/CA')
        };
        var app = require('https').createServer(credentials, handler);

    I have several problems with this. First off, all the examples I found use completely different approaches. Some link to .pem files for both the certificate and the key; I don't know what PEM files are, but I know my certificate is a .crt and my key is a .key. Some start off at the root folder and some seem to just have these .pem files in the application directory; I don't. Some use the ca option and some don't; this CA is supposed to be my domain's CA bundle according to some articles, but none explain where to find this file. In the ssl directory on my server I have one .crt file in the certs directory and one .key file in the keys directory, in addition to an empty csrs directory and an ssl.db file. So, where do I find these 3 files (key, cert, ca) and how do I link to them correctly?
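
    For reference: .pem, .crt and .key files normally all use the same PEM encoding (base64 blocks between BEGIN/END markers), so the existing .crt and .key can be passed straight in, and the ca entry is the issuer's intermediate certificate bundle, which comes from the certificate authority rather than being generated on the server. A minimal sketch with hypothetical absolute paths:

    ```js
    var fs = require('fs');
    var https = require('https');

    var options = {
        key:  fs.readFileSync('/path/to/ssl/keys/example.com.key'),        // the private key already on the server
        cert: fs.readFileSync('/path/to/ssl/certs/example.com.crt'),       // the issued certificate
        ca:   fs.readFileSync('/path/to/ssl/certs/example.com.ca-bundle')  // intermediate chain, if the issuer provided one
    };

    https.createServer(options, function (req, res) {
        res.writeHead(200);
        res.end('hello over TLS\n');
    }).listen(443);
    ```

    If the issuer supplied no bundle, the ca entry can simply be omitted.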

  • Why does Rails screw up timezones when I am editing a resource?

    - by DJTripleThreat
    Steps to reproduce this:

        prompt> rails test_app
        prompt> cd test_app
        prompt> script/generate scaffold date_test my_date:datetime
        prompt> rake db:migrate

    Now edit your app/views/date_tests/edit.html.erb:

        <h1>Editing date_test</h1>
        <% form_for(@date_test) do |f| %>
          <%= f.error_messages %>
          <p>
            RIGHT!<br/>
            <%= text_field_tag @date_test, f.object.my_date %>
          </p>
          <p>
            WRONG!<br />
            <%= f.text_field :my_date %>
          </p>
          <p>
            <%= f.submit 'Update' %>
          </p>
        <% end %>
        <%= link_to 'Show', @date_test %> |
        <%= link_to 'Back', date_tests_path %>

    Now edit your config/environment.rb and add this:

        config.time_zone = 'Central Time (US & Canada)'

    This recreates the problem I am having in my actual app. The problem with my app is that I'm storing a date in a hidden field and rendering a "user friendly" version. Creating a resource works fine, but as soon as I try to edit it the time changes (it adds the difference between my current time zone configuration and UTC). Go to http://localhost:3000/date_tests/new and save the time, then go to re-edit it and you will have two different representations of the date/time, one which will save incorrectly and the other which will save correctly.
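
    A hedged explanation of what is probably happening: the form builder renders the attribute's value before type cast, i.e. the raw UTC string from the database, while f.object.my_date has already been converted to the configured zone; on save the raw UTC string is then reinterpreted, which produces the shift. A minimal sketch of forcing the converted value into the builder field:

    ```erb
    <%= f.text_field :my_date, :value => f.object.my_date %>
    ```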

  • How to Update with LINQ?

    - by DaveDev
    Currently I'm doing an update similar to the following, because I can't see a better way of doing it. I've tried suggestions that I've read in blogs but none work, such as http://msdn.microsoft.com/en-us/library/bb425822.aspx and http://weblogs.asp.net/scottgu/archive/2007/05/19/using-linq-to-sql-part-1.aspx. Maybe these do work and I'm missing some point. Has anyone else had luck with them?

        // please note this isn't the actual code.
        // I've modified it to clarify the point I wanted to make
        // and also I didn't want to post our code here!
        public bool UpdateMyStuff(int myId, List<int> funds)
        {
            // get MyTypes that correspond to the ID I want to update
            IQueryable<MyType> myTypes = database.MyTypes.Where(xx => xx.MyType_MyId == myId);

            // delete them from the database
            foreach (MyType mt in myTypes)
            {
                database.MyTypes.DeleteOnSubmit(mt);
            }
            database.SubmitChanges();

            // create a new row for each item I wanted to update, and insert it
            foreach (int fund in funds)
            {
                MyType mt = new MyType { MyType_MyId = myId, fund_id = fund };
                database.MyTypes.InsertOnSubmit(mt);
            }

            // try to commit the insert
            try
            {
                database.SubmitChanges();
                return true;
            }
            catch (Exception)
            {
                return false;
                throw;
            }
        }

    Unfortunately there isn't a database.MyTypes.Update() method, so I don't know a better way to do it. Can somebody suggest what I could do? Thanks. As a side note, why isn't there an Update() method?
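
    In LINQ to SQL an "update" is simply mutating objects the DataContext is already tracking and calling SubmitChanges(), which is why there is no explicit Update() method. A hedged sketch of an alternative that avoids deleting and reinserting everything: load the existing rows once, delete only the stale ones and insert only the missing ones (same hypothetical MyType/fund_id names as above):

    ```csharp
    public bool UpdateMyStuff(int myId, List<int> funds)
    {
        var existing = database.MyTypes.Where(mt => mt.MyType_MyId == myId).ToList();

        // remove rows whose fund is no longer wanted
        var stale = existing.Where(mt => !funds.Contains(mt.fund_id)).ToList();
        database.MyTypes.DeleteAllOnSubmit(stale);

        // add rows for funds that aren't there yet
        var present = existing.Select(mt => mt.fund_id).ToList();
        foreach (int fund in funds.Where(f => !present.Contains(f)))
        {
            database.MyTypes.InsertOnSubmit(new MyType { MyType_MyId = myId, fund_id = fund });
        }

        try
        {
            database.SubmitChanges();
            return true;
        }
        catch (Exception)
        {
            return false;
        }
    }
    ```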

  • CodeIgniter - accessing variables from an array passed into a page.

    - by Matt
    Hello, I have a controller with an index function as follows:

        function index()
        {
            $this->load->model('products_model');
            $data['product'] = $this->products_model->get(3); // 3 = product id
            $data['product_no'] = 3;
            $data['main_content'] = 'product_view';
            //print_r($data['products']);
            $this->load->view('includes/template', $data);
        }

    This is the get function in the products_model file:

        function get($id)
        {
            $results = $this->db->get_where('products', array('id' => $id))->result();
            // get the first item
            $result = $results[0];
            return $result;
        }

    The products table contains fields such as name, price, etc. Please can you tell me how to output variables from $data['product'] after it is passed into the view? I have tried so many things but nothing is working; even though the print_r (commented out) shows the data, it is not being passed into the view. I thought it may have been because the view calls a template file which references the main_content variable. Template file contents:

        <?php $this->load->view('includes/header'); ?>
        <?php $this->load->view($main_content); ?>
        <?php $this->load->view('includes/footer'); ?>

    but I tried creating a flat view file and still couldn't access the variables. Many thanks,
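
    For reference, a hedged sketch of how the data would normally be reached inside product_view.php: CodeIgniter turns each key of $data into a variable in the loaded views (including views loaded from within the template), and result() returns objects, so $product is accessed with -> (the field names below are assumptions):

    ```php
    <!-- application/views/product_view.php -->
    <h1><?php echo $product->name; ?></h1>
    <p>Price: <?php echo $product->price; ?></p>
    <p>Product number: <?php echo $product_no; ?></p>
    ```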

  • Synchronizing one or more databases with a master database - Foreign keys

    - by Ikke
    I'm using Google Gears to be able to use an application offline (I know Gears is deprecated). The problem I am facing is the synchronization with the database on the server. The specific problem is the primary keys or, more exactly, the foreign keys. When sending the information to the server, I could easily ignore the primary keys and generate new ones, but then how would I know what the relations are? I had one solution in mind, but then I would need to save all the PKs for every client. What is the best way to synchronize multiple clients with one server DB?

    Edit: I've been thinking about it, and I guess sequential primary keys are not the best solution, but what other possibilities are there? Time-based keys don't seem right because of the collisions that could happen. A GUID comes to mind; is that an option? It looks like generating a GUID in JavaScript is not that easy. I could do something with natural keys or composite keys. As I'm thinking about it, that looks like the best solution. Can I expect any problems with that?
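
    On the "GUID in JavaScript" point, a commonly used sketch of a version 4 UUID generator (Math.random based, so fine for client-generated surrogate keys but not cryptographically strong):

    ```js
    function uuidv4() {
        return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function (c) {
            var r = Math.random() * 16 | 0;              // random nibble
            var v = (c === 'x') ? r : (r & 0x3 | 0x8);   // the 'y' nibble must be 8, 9, a or b
            return v.toString(16);
        });
    }
    ```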

  • How should I pass the translated text to my object in my multilingual application?

    - by boatingcow
    Up until now, I have maintained a 'dictionary' table in my database, for example:

        phrase     | en                                    | fr                                            | etc...
        -----------+---------------------------------------+-----------------------------------------------+-------
        generated  | Generated in %1$01.2f seconds at %2$s | Créée en %1$01.2f secondes à %2$s aujourd'hui | ...
        submit     | Submit...                             | Envoyer...                                    | ...

    I'll then select all rows from the database for the column that matches the locale we're interested in (or read a cached copy from a file to speed up the DB lookup) and dump the dictionary into an array called $lng. Then I'll have HTML helper objects like this in my view:

        $html->input(array('type' => 'submit', 'value' => $lng['submit'], etc...));
        ...
        $html->div(array('value' => sprintf($lng['generated'], $generated, date('H:i')), etc...));

    The translations can appear in PDF, XLS and AJAX responses too. The problem with my approach so far is that I now have loads of global $lng; statements in every class where there is a function that spits out UI code. How do other people get the translation into the object? Is it one of those scenarios where globals aren't actually that bad? Would it be madness to create a class with accessors when the dictionary terms are all static?
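
    A hedged sketch of the "class with accessors" idea, so the helpers call a static translator instead of declaring global $lng everywhere (the class and method names are hypothetical):

    ```php
    class Lang
    {
        private static $dict = array();

        public static function load(array $dict)
        {
            self::$dict = $dict;          // the $lng array built from the dictionary table
        }

        public static function t($key)
        {
            $phrase = isset(self::$dict[$key]) ? self::$dict[$key] : $key;
            $args = array_slice(func_get_args(), 1);
            return $args ? vsprintf($phrase, $args) : $phrase;
        }
    }

    // usage:
    //   Lang::load($lng);
    //   $html->input(array('type' => 'submit', 'value' => Lang::t('submit')));
    //   $html->div(array('value' => Lang::t('generated', $generated, date('H:i'))));
    ```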

  • PHP OOP singleton doesn't return object

    - by Misiur
    Weird trouble. I've used singletons multiple times but this particular case just doesn't want to work. A dump says that the instance is null.

        define('ROOT', "/");
        define('INC', 'includes/');
        define('CLS', 'classes/');

        require_once(CLS.'Core/Core.class.php');

        $core = Core::getInstance();
        var_dump($core->instance);
        $core->settings(INC.'config.php');
        $core->go();

    Core class:

        class Core
        {
            static $instance;
            public $db;
            public $created = false;

            private function __construct()
            {
                $this->created = true;
            }

            static function getInstance()
            {
                if (!self::$instance) {
                    self::$instance = new Core();
                } else {
                    return self::$instance;
                }
            }

            public function settings($path = null) { ... }

            public function go() { ... }
        }

    Error code:

        Fatal error: Call to a member function settings() on a non-object in path

    It's possibly some stupid typo, but I don't have any errors in my editor. Thanks for the fast responses as always.
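
    For what it's worth, the getInstance() above only returns the instance on the second and later calls: the first call takes the if branch, which creates the object but returns nothing, so $core ends up null. A corrected sketch:

    ```php
    static function getInstance()
    {
        if (!self::$instance) {
            self::$instance = new Core();
        }
        return self::$instance;   // return unconditionally, not only in the else branch
    }
    ```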

  • Problems with retrieving data that was saved outside of a Grails webflow

    - by callie16
    Hi, this is actually connected to an earlier question of mine here. Anyway, I have 3 domains that look like this:

        class A {
            ...
            static hasMany = [ b : B ]
            ...
        }

        class B {
            ...
            static belongsTo = [ a : A ]
            static hasMany = [ c : C ]
            ...
        }

        class C {
            ...
            static belongsTo = [ b : B ]
            ...
        }

    In my GSP page, I call an action in the controller via a remote function in a JavaScript block (I'm using Dojo, so I'm passing the data to be saved this way; it's not a form per se, so for now I use JSON to pass the data to the controller). Let's say I'm calling something like this:

        def someAction = {
            def jsonArr = [parse the JSON here]
            def tmpA = A.get(params.id)
            ...
            def tmpB = new B()
            tmpB.someParam = jsonArr.someParam
            ...
            def tmpC = new C()
            tmpC.cParam = jsonArr.cParam
            tmpB.addToC(tmpC)
            tmpB.save(flush: true) // this may or may not be here, but I'm adding it for the sake of completeness

            tmpA.addToB(tmpB)
            tmpA.save(flush: true)
            // NOTE: If I check here via println or whatnot, tmpA has a tmpB which has a tmpC...
            // in other words, the data got saved. It's also in the DB.

            redirect(action: 'order' ...)
        }

    Then comes the fun part. Here's the webflow sample:

        def orderFlow = {
            ...
            someStateIShouldEndUpIn {
                on("next") { // or on previous... doesn't matter
                    def anId = params.id
                    def currA = A.get(anId)  // this does NOT return a null value
                    def testB = currA.b      // this DOES return a null value
                }.to("somePage")
            }
            ...
        }

    Any ideas on why this happens? Moreover, when I dump the data of currA, b=null instead of b=[] or b=[contents of tmpB]. Any help would be seriously appreciated... been at this for a couple of days now. Thanks!

  • SQL Server log backups "stalling"

    - by MattK
    I have inherited a box running SQL Server 2008 on Windows 2003 and have had a few events where largish (35 GB) log backups "stall", both before and after the installation of SQL 2008 SP1. The server log ships to a standby, so regular log backups are taken at 15 minute intervals. However, after an index reorg causes the log to grow to about 35 GB (on a DB with about 17 GB of data), the next log backup runs to ~95% completion, then seems to stop. The process shows as suspended, with a wait state of BACKUPIO. CPU, read and write activity on the SPID also does not change, and the process stays in this state for hours, when normally a backup of this size should complete in about 20 minutes. This server has a single RAID-1 volume, thus the source database files and destination backup files are on the same volume. However, I cannot determine if another process is blocking the backup. The backup SPID cannot be killed, and the only way to terminate the log backup and clear the lock on the backup file is to cycle the SQL Server service. There was one event where the backup terminated completely, with an error that another process had locked the backup file, but no details about what that process was. Can anyone suggest a cause or a diagnostic process for this situation?
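
    One hedged way to watch the stalled backup from another session, using standard DMVs (SQL Server 2005 and later):

    ```sql
    SELECT session_id, command, percent_complete, wait_type, wait_time, blocking_session_id
    FROM sys.dm_exec_requests
    WHERE command LIKE 'BACKUP%';
    ```

    A non-zero blocking_session_id would identify the process holding up the backup file, and a percent_complete that stops moving while wait_time keeps climbing narrows the problem to the I/O path rather than the database itself.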

  • Excessive use of INNER JOIN for more than 3 tables

    - by Archangel08
    Good day, I have 4 tables in my DB (not the actual names, but almost similar), which are the following: employee, education, employment_history, referrence. employee_id is the name of the foreign key from the employee table. Here's some example (not actual) data:

        Employee
        ID  Name        Birthday    Gender  Email
        1   John Smith  08-15-2014  Male    [email protected]
        2   Jane Doe    00-00-0000  Female  [email protected]
        3   John Doe    00-00-0000  Male    [email protected]

        Education
        Employee_ID  Primary          Secondary          Vocation
        1            Westside School  Westshore H.S      SouthernBay College
        2            Eastside School  Eastshore H.S      NorthernBay College
        3            Northern School  SouthernShore H.S  WesternBay College

        Employment_History
        Employee_ID  WorkOne          StartDate   Enddate
        1            StarBean Cafe    12-31-2012  01-01-2013
        2            Coffebucks Cafe  11-01-2012  11-02-2012
        3            Latte Cafe       01-02-2013  04-05-2013

        Referrence
        Employee_ID  ReferrenceOne     Address           Contact
        1            Abraham Lincoln   Lincoln Memorial  0000000000
        2            Frankie N. Stein  Thunder St.       0000000000
        3            Peter D. Pan      Neverland Ave.    0000000000

    NOTE: I've only included a few columns, though the rest are part of the query. And below is the code I've been working on for 3 consecutive days:

        $sql = mysql_query("SELECT emp.id, emp.name, emp.birthday, emp.pob, emp.gender, emp.civil, emp.email,
            emp.contact, emp.address, emp.paddress, emp.citizenship,
            educ.employee_id, educ.elementary, educ.egrad, educ.highschool, educ.hgrad, educ.vocational, educ.vgrad,
            ems.employee_id, ems.workOne, ems.estartDate, ems.eendDate, ems.workTwo, ems.wstartDate, ems.wendDate,
            ems.workThree, ems.hstartDate, ems.hendDate
            FROM employee AS emp
            INNER JOIN education AS educ ON educ.employee_id='emp.id'
            INNER JOIN employment_history AS ems ON ems.employee_id='emp.id'
            INNER JOIN referrence AS ref ON ref.employee_id='emp.id'
            WHERE emp.id='$id'");

    Is it okay to use INNER JOIN this way, or should I modify my query to get the results that I wanted? I've also tried to use LEFT JOIN but it still doesn't return anything, and I don't know where I went wrong. As far as I could tell I was using INNER JOIN in the correct manner (since it was placed before the WHERE clause), so I couldn't think of what could have possibly gone wrong. Do you guys have a suggestion? Thanks in advance.
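
    For reference, a hedged sketch of the likely fix: quoting emp.id as 'emp.id' makes MySQL compare employee_id against the literal string 'emp.id' rather than against the column, so the joins never match and every row is filtered out. With the quotes removed (and the id passed as a bound parameter rather than interpolated):

    ```sql
    SELECT emp.*, educ.*, ems.*, ref.*
    FROM employee AS emp
    INNER JOIN education          AS educ ON educ.employee_id = emp.id
    INNER JOIN employment_history AS ems  ON ems.employee_id  = emp.id
    INNER JOIN referrence         AS ref  ON ref.employee_id  = emp.id
    WHERE emp.id = ?;
    ```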
