Search Results

Search found 23645 results on 946 pages for 'oracle berkeley db'.

  • IRM Item Codes – what are they for?

    - by martin.abrahams
    A number of colleagues have been asking about IRM item codes recently – what are they for, when are they useful, how can you control them to meet some customer requirements? This is quite a big topic, but this article provides a few answers. An item code is part of the metadata of every sealed document – unless you define a custom metadata model. The item code is defined when a file is sealed, and usually defaults to a timestamp/filename combination. This time/name combo tends to make item codes unique for each new document, but actually item codes are not necessarily unique, as will become clear shortly. In most scenarios, item codes are not relevant to the evaluation of a user’s rights - the context name is the critical piece of metadata, as a user typically has a role that grants access to an entire classification of information regardless of item code. This is key to the simplicity and manageability of the Oracle IRM solution. Item codes are occasionally exposed to users in the UI, but most users probably never notice and never care. Nevertheless, here is one example of where you can see an item code – when you hover the mouse pointer over a sealed file. As you see, the item code for this freshly created file combines a timestamp with the file name. But what are item codes for? The first benefit of item codes is that they enable you to manage exceptions to the policy defined for a context. Thus, I might have access to all oracle – internal files - except for 2011_03_11 13:33:29 Board Minutes.sdocx. This simple mechanism enables Oracle IRM to provide file-by-file control where appropriate, whilst offering the scalability and manageability of classification-based control for the majority of users and content. You really don’t want to be managing each file individually, but never say never. Item codes can also be used for the opposite effect – to include a file in a user’s rights when their role would ordinarily deny access. So, you can assign a role that allows access only to specified item codes. For example, my role might say that I have access to precisely one file – the one shown above. So how are item codes set? In the vast majority of scenarios, item codes are set automatically as part of the sealing process. The sealing API uses the timestamp and filename as shown, and the user need not even realise that this has happened. This automatically creates item codes that are for all practical purposes unique - and that are also intelligible to users who might want to refer to them when viewing or assigning rights in the management UI. It is also possible for suitably authorised users and applications to set the item code manually or programmatically if required. Setting the item code manually using the IRM Desktop The manual process is a simple extension of the sealing task. An authorised user can select the Advanced… sealing option, and will see a dialog that offers the option to specify the item code. To see this option, the user’s role needs the Set Item Code right – you don’t want most users to give any thought at all to item codes, so by default the option is hidden. Setting the item code programmatically A more common scenario is that an application controls the item code programmatically. For example, a document management system that seals documents as part of a workflow might set the item code to match the document’s unique identifier in its repository. 
This offers the option to tie IRM rights evaluation directly to the security model defined in the document management system. Again, the sealing application needs to be authorised to Set Item Code. The Payslip Scenario: To give a concrete example of how item codes might be used in a real-world scenario, consider a Human Resources workflow such as payslips. The goal might be to allow the HR team to have access to all payslips, but each employee to have access only to their own payslips. To enable this, you might have an IRM classification called Payslips. The HR team have a role in the normal way that allows access to all payslips. However, each employee would have an Item Reader role that only allows them to access files that have a particular item code – and that item code might match the employee’s payroll number. So, employee number 123123123 would have access to items with that code. This shows why item codes are not necessarily unique – you can deliberately set the same code on many files for ease of administration. The employees might have the right to unseal or print their payslip, so the solution acts as a secure delivery mechanism that allows payslips to be distributed via corporate email without any fear that they might be accessed by IT administrators, or forwarded accidentally to anyone other than the intended recipient. All that remains is to ensure that as each user’s payslip is sealed, it is assigned the correct item code – something that is easily managed by a simple IRM sealing application. Each month, an employee’s payslip is sealed with the same item code, so you do not need to keep amending the list of items that the user has access to – they have access to all documents that carry their employee code.

    Read the article

  • Pear MDB2 class and raiserror exceptions in MSSQL

    - by drholzmichl
    Hi, in MSSQL it's possible to raise an error with raiserror(). I want to use a severity that doesn't interrupt the connection. This error is raised in a stored procedure. In SQL Management Studio all is fine and I get my error code when executing this SP. But when trying to execute this SP via MDB2 in PHP5 this doesn't work. All I get is an empty array. The MDB2 object is created via (including needed options): $db =& MDB2::connect($dsn); $db->setFetchMode(MDB2_FETCHMODE_ASSOC); $db->setOption('portability',MDB2_PORTABILITY_ALL ^ MDB2_PORTABILITY_EMPTY_TO_NULL); The following works (I get a PEAR error): $db->query("RAISERROR('test',11,0);"); But when calling a stored procedure which raises this error via $db->query("EXEC sp_raise_error"); there is no output. What's wrong?

    Read the article

  • appcfg.py upload_data entity kind problem

    - by Dingo
    Hi, I am developing an application on app-engine-path and I would like to upload some data to the datastore. For example I have a model models/places.py: class Place(db.Model): name = db.StringProperty() longitude = db.FloatProperty() latitude = db.FloatProperty() If I save this in a view, the kind() of this entity is "models_place". All is OK, and Place.all() in the view works fine. But if I upload some new rows using appcfg.py upload_data, the kind() of these entities is Place. loader.py looks like this: import datetime, os, sys from google.appengine.ext import db from google.appengine.tools import bulkloader libs_path = os.path.join("/home/martin/myproject/src/") if libs_path not in sys.path: sys.path.insert(0, libs_path) from models import places class AlbumLoader(bulkloader.Loader): def __init__(self): bulkloader.Loader.__init__(self, 'Place', [('name', lambda x: x.decode('utf-8')), ('longitude', float), ('latitude', float), ]) loaders = [AlbumLoader] and the command for uploading: python /usr/local/google_appengine/appcfg.py upload_data --config_file=places_loader.py --kind=models_place --filename=data/places.csv --url=http://localhost:8000/remote_api /home/martin/myproject/src/
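
    One likely cause is that the loader registers the kind 'Place' while the running app stores these entities under the kind name "models_place". The sketch below (my own illustration, not from the original post; the class name PlaceLoader is an assumption) makes the model and the loader agree on that kind name:

    ```python
    # Sketch only: force both the model and the bulkloader to use the kind
    # name "models_place", matching what the app writes from its views.
    from google.appengine.ext import db
    from google.appengine.tools import bulkloader

    class Place(db.Model):
        name = db.StringProperty()
        longitude = db.FloatProperty()
        latitude = db.FloatProperty()

        @classmethod
        def kind(cls):
            # Override the default kind (the class name "Place").
            return 'models_place'

    class PlaceLoader(bulkloader.Loader):
        def __init__(self):
            # Register the loader against the same kind string.
            bulkloader.Loader.__init__(self, 'models_place',
                                       [('name', lambda x: x.decode('utf-8')),
                                        ('longitude', float),
                                        ('latitude', float)])

    loaders = [PlaceLoader]
    ```

    With the kind names aligned, the --kind=models_place flag passed to appcfg.py matches the entities the app later queries with Place.all().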

    Read the article

  • How to generate Doctrine models/classes that extend a custom record class

    - by Shane O'Grady
    When I use Doctrine to generate classes from Yaml/db each Base class (which includes the table definition) extends the Doctrine_Record class. Since my app uses a master and (multiple) slave db servers I need to be able to make the Base classes extend my custom record class to force writes to go to the master db server (as described here). However if I change the base class manually I lose it again when I regenerate my classes from Yaml/db using Doctrine. I need to find a way of telling Doctrine to extend my own Base class, or find a different solution to a master/slave db setup using Doctrine. Example generated model: abstract class My_Base_User extends Doctrine_Record { However I need it to be automatically generated as: abstract class My_Base_User extends My_Record { I am using Doctrine 1.2.1 in a new Zend Framework 1.9.6 application if it makes any difference.

    Read the article

  • Transforming a string to a valid PDO_MYSQL DSN

    - by Alix Axel
    What is the most concise way to transform a string in the following format: mysql:[/[/]][user[:pass]@]host[:port]/db[/] into a usable PDO connection/instance (using the PDO_MYSQL DSN)? Some possible examples: $conn = new PDO('mysql:host=host;dbname=db'); $conn = new PDO('mysql:host=host;port=3307;dbname=db'); $conn = new PDO('mysql:host=host;port=3307;dbname=db', 'user'); $conn = new PDO('mysql:host=host;port=3307;dbname=db', 'user', 'pass'); I've been trying some regular expressions (preg_[match|split|replace]) but they either don't work or are too complex; my gut tells me this is not the way to go, but nothing else comes to my mind. Any suggestions?

    Read the article

  • sqlcipher command line not working

    - by Min Lin
    I have an encrypted SQLite db and its key (which was generated by an Android program). However, when I open the db on the command line I cannot read it. The command line tool was installed with: brew install sqlcipher I open the database with: sqlcipher EnDB.db >pragma key="6b74fcd"; >select * from bizinfo; It keeps telling me "Error: file is encrypted or is not a database". However, if I open the database file with the GUI app SQLite Database Browser (which is a Windows program that I run in Wine), it pops up a window for me to enter the key, and with 6b74fcd as the key it successfully reads the database. As I want to process the db automatically in the future, I cannot depend on the GUI. Do you know why the command line is not working?

    Read the article

  • How to insert/update multiple records into an SQLite database in a single query.

    - by TuanCM
    Hi guys. Is it possible to insert/update multiple records in an SQLite database using the EGODatabase wrapper? If I'm correct, I think we can do it with FMDatabase by wrapping the statements between [db beginTransaction] and [db commit]. I wonder if we can do the same thing using EGODatabase. The following is a code sample from the FMDatabase project: [db beginTransaction]; i = 0; while (i++ < 20) { [db executeUpdate:@"insert into test (a, b, c, d, e) values (?, ?, ?, ?, ?)" , @"hi again'", // look! I put in a ', and I'm not escaping it! [NSString stringWithFormat:@"number %d", i], [NSNumber numberWithInt:i], [NSDate date], [NSNumber numberWithFloat:2.2f]]; } [db commit];

    Read the article

  • Can I specify the order of how changes happen in a single App Engine transaction? Is it equal to t

    - by indiehacker
    If I passed a list of key ids as an argument in a transaction, would the change associated with the first key in the list happen first? And if not, how do I specify the order that I want the changes to happen in? As a concrete example, consider the code below from the App Engine datastore transactions docs (http://code.google.com/appengine/docs/python/datastore/transactions.html) -- would changes to the first item in acc.key() happen first? class Accumulator(db.Model): counter = db.IntegerProperty() def increment_counter(key, amount): obj = db.get(key) obj.counter += amount obj.put() q = db.GqlQuery("SELECT * FROM Accumulator") acc = q.get() db.run_in_transaction(increment_counter, acc.key(), 5)
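
    For what it's worth, here is a small sketch of my own (not from the docs) showing how ordering is normally controlled: the statements inside the transaction function run in the order they are written, so the order is decided by the function body rather than by the argument list. Note that a plain datastore transaction can only touch entities in a single entity group.

    ```python
    # Sketch only: update several counters inside one transaction, in the
    # order the keys appear in the list passed to the function.
    from google.appengine.ext import db

    def increment_counters(keys, amount):
        for key in keys:                 # processed strictly in list order
            obj = db.get(key)
            obj.counter += amount
            obj.put()

    # All keys must belong to the same entity group for this to commit:
    # db.run_in_transaction(increment_counters, [key_a, key_b], 5)
    ```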

    Read the article

  • Can SQLAlchemy DateTime Objects Only Be Naive?

    - by Sean M
    I am working with SQLAlchemy, and I'm not yet sure which database I'll use under it, so I want to remain as DB-agnostic as possible. How can I store a timezone-aware datetime object in the DB without tying myself to a specific database? Right now, I'm making sure that times are UTC before I store them in the DB, and converting to localized times at display time, but that feels inelegant and brittle. Is there a DB-agnostic way to get a timezone-aware datetime out of SQLAlchemy instead of getting naive datetime objects out of the DB?
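
    One commonly suggested, DB-agnostic approach (a sketch of my own, not from the question) is to wrap DateTime in a TypeDecorator that normalizes values to naive UTC on the way into the database and re-attaches the UTC timezone on the way out:

    ```python
    # Sketch only: stores naive UTC in the database, returns aware UTC datetimes.
    from datetime import timezone
    from sqlalchemy.types import TypeDecorator, DateTime

    class UTCDateTime(TypeDecorator):
        impl = DateTime

        def process_bind_param(self, value, dialect):
            # Normalize aware datetimes to naive UTC before they hit the DB.
            if value is not None and value.tzinfo is not None:
                value = value.astimezone(timezone.utc).replace(tzinfo=None)
            return value

        def process_result_value(self, value, dialect):
            # Re-attach UTC so callers always get timezone-aware objects back.
            if value is not None:
                value = value.replace(tzinfo=timezone.utc)
            return value

    # Usage in a model definition (hypothetical column name):
    # created_at = Column(UTCDateTime())
    ```

    Conversion to the viewer's local zone then stays purely a display concern.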

    Read the article

  • LinqToSql: insert instead of update

    - by Christina Mayers
    I have been stuck with this problem for a long time now. All I am trying to do is insert a row in my DB if it's new information - if not, update the existing one. I've updated many entities in my life before - but what's wrong with this code is beyond me (probably something pretty basic). I guess I can't see the wood for the trees... private Models.databaseDataContext db = new Models.databaseDataContext(); internal void StoreInformations(IEnumerable<EntityType> iEnumerable) { foreach (EntityType item in iEnumerable) { EntityType type = db.EntityType.Where(t => t.Room == iEnumerable.Room).FirstOrDefault(); if (type == null) { db.EntityType.InsertOnSubmit(item); } else { cur.Date = item.Date; cur.LastUpdate = DateTime.Now(); cur.End = item.End; } } } internal void Save() { db.SubmitChanges(); }

    Read the article

  • linq to sql using foreign keys returning iqueryable(of myEntity)

    - by Gern Blandston
    I'm trying to use Linq to SQL to return an IQueryable(of Project) when using foreign key relationships. Using the below schema, I want to be able to pass in a UserId and get all the projects created for the company the user is associated with. DB tables: Projects Projid ProjCreator FK (UserId from UserInfo table) Companyid FK (CompanyID from Companies table) UserInfo UserID PK Companyid FK Companies CompanyId PK Description I can get the iqueryable(of project) when simply getting the ProjectCreator with this: Return (From p In db.Projects _ Where p.ProjectCreator = Me.UserId) But I'm having trouble getting the syntax to get a iqueryable(of projects) when using foreign keys. Below gives me an IQueryable(of anonymous) but I can't seem to convince it to give me an IQueryable(of project) even if I try to cast it: Dim retval = (From p In db.Projects _ Join c In db.Companies On p.CompanyId Equals c.CompanyId _ Join u In db.UserInfos On u.CompanyId Equals c.CompanyId _ Where u.Login = UserId)

    Read the article

  • What is the proper location for a sqlite3 database file?

    - by Elliot Chen
    Hi, everyone: I'm using a sqlite3 database to store the app's data. Instead of building a database within the program, I introduced an existing db file, 'abc.sqlite', into my project and put it under my 'Resources' folder. So, I think this db file should be inside the 'bundle', so in my init function I used the following statement to read it out: NSString *path = [[NSBundle mainBundle] pathForResource:@"abc" ofType:@"sqlite"]; if(sqlite3_open([path UTF8String], &database) != SQLITE_OK) ... It's OK that this db can be opened and data can be retrieved from it. BUT, someone told me that it's better to copy this db file into a user folder such as 'Documents'. So, my question is: is it OK to use this db from the main bundle directly, or copy it to a user folder and then use that copy? Which is better? Thank you very much!

    Read the article

  • Bulletproof way to DROP and CREATE a database under Continuous Integration.

    - by H. Abraham Chavez
    I am attempting to drop and recreate a database from my CI setup. But I'm finding it difficult to automate the dropping and creation of the database, which is to be expected given the complexities of the db being in use. Sometimes the process hangs, errors out with "db is currently in use" or just takes too long. I don't care if the db is in use, I want to kill it and create it again. Does someone have a straight-shot method to do this? Alternatively, does anyone have experience dropping all objects in the db instead of dropping the db itself? USE master --Create a database IF EXISTS(SELECT name FROM sys.databases WHERE name = 'mydb') BEGIN ALTER DATABASE mydb SET SINGLE_USER --or RESTRICTED_USER --WITH ROLLBACK IMMEDIATE DROP DATABASE mydb END CREATE DATABASE mydb;

    Read the article

  • Content in Context: The right medicine for your business applications

    - by Lance Shaw
    For many of you, your companies have already invested in a number of applications that are critical to the way your business is run. HR, Payroll, Legal, Accounts Payable – and while they might need an upgrade in some cases, they are all there, handling the lifeblood of your business. But are they really running as efficiently as they could be? For many companies, the answer is no. The problem has to do with the important information caught up within documents and paper. It’s everywhere except where it truly needs to be – readily available right within the context of the application itself. When the right information cannot be easily found, business processes suffer significantly. The importance of this struck me recently when I went to meet my new doctor and get a routine physical. Walking into the office lobby, I couldn't help but notice rows and rows of manila folders in racks from floor to ceiling, filled with documents and sensitive, personal information about various patients like myself. As I looked at all that paper and all that history, two things immediately popped into my head: “How do they find anything?” and then the even more alarming, “So much for information security!” It sure looked to me like all those documents could be accessed by anyone with a key to the building. Now the truth is that the offices of many general practitioners look like this all over the United States and the world. But it had me thinking, is the same thing going on in just about any company around the world, involving a wide variety of important business processes? Probably so. Think about all the various processes going on in your company right now. Invoice payments are being processed through Accounts Payable, contracts are being reviewed by Procurement, and Human Resources is reviewing job candidate submissions and doing background checks. All of these processes and many more like them rely on access to forms and documents, whether they are paper or digital. Now consider that it is estimated that employees spend nearly 9 hours a week searching for information and not finding it. That is a lot of very well-paid employees, spending more than one day per week not doing their regular job while they search for or re-create what already exists. Back in the doctor’s office, I saw this trend exemplified as well. First, I had to fill out a new patient form, even though my previous doctor had transferred my records over months earlier. After filling out the form, I was later introduced to my new doctor who then interviewed me and asked me the exact same questions that I had answered on the form. I understand that there is value in the interview process and it was great to meet my new doctor, but this simple process could have been so much more efficient if the information already on file could have been brought directly together with the new patient information I had provided. Instead of having a highly paid medical professional re-enter the same information into the records database, the form I filled out could have been immediately scanned into the system, associated with my previous information, discrepancies identified, and the entire process streamlined significantly.
We won’t solve the health records management issues that exist in the United States in this blog post, but this example illustrates how the automation of information capture and classification can eliminate a lot of repetitive and costly human entry and re-creation, even in a simple process like new patient on-boarding. In a similar fashion, by taking a fresh look at the various processes in place today in your organization, you can likely spot points along the way where automating the capture and access to the right information could be significantly improved. As you evaluate how content-process flows through your organization, take a look at how departments and regions share information between the applications they are using. Business applications are often implemented on an individual department basis to solve specific problems but a holistic approach to overall information management is not taken at the same time. The end result over the years is disparate applications with separate information repositories and in many cases these contain duplicate information, or worse, slightly different versions of the same information. This is where Oracle WebCenter Content comes into the story. More and more companies are realizing that they can significantly improve their existing application processes by automating the capture of paper, forms and other content. This makes the right information immediately accessible in the context of the business process and making the same information accessible across departmental systems which has helped many organizations realize significant cost savings. Here on the Oracle WebCenter team, one of our primary goals is to help customers find new ways to be more effective, more cost-efficient and manage information as effectively as possible. We have a series of three webcasts occurring over the next few weeks that are focused on the integration of enterprise content management within the context of business applications. We hope you will join us for one or all three and that you will find them informative. Click here to learn more about these sessions and to register for them. There are many aspects of information management to consider as you look at integrating content management within your business applications. We've barely scratched the surface here but look for upcoming blog posts where we will discuss more specifics on the value of delivering documents, forms and images directly within applications like Oracle E-Business Suite, PeopleSoft Enterprise, JD Edwards Enterprise One, Siebel CRM and many others. What do you think?  Are your important business processes as healthy as they can be?  Do you have any insights to share on the value of delivering content directly within critical business processes? Please post a comment and let us know the value you have realized, the lessons learned and what specific areas you are interested in.

    Read the article

  • Best performance approach to history mechanism?

    - by Royi Namir
    We are going to create a history mechanism for our changes in the DB (DART in the picture) via triggers. We have 600 tables. For each record that is changed, the trigger will insert the deleted version into XXX. Regarding XXX: option 1: clone each table in the "Dart" DB, so each table will have a "sister table", e.g. Table1 will have Table1_History. Problems: we will have 1200 tables, and a programmer can make mistakes by working on the wrong tables... option 2: make a new DB (DART_2005 in the picture) and keep the history tables there. option 3: use a linked server which stores the DB that will contain the history tables. Questions: 1) Which option gives the best performance (I guess 3 does not - but is it 1 or 2, or the same)? 2) Does option 2 act like a "linked server" (in queries we will need to select from both DBs...)? 3) What is the best-practice approach?

    Read the article

  • Counts of events grouped by date in python?

    - by Sologoub
    This is no doubt another noobish question, but I'll ask it anyways: I have a data set of events with exact datetime in UTC. I'd like to create a line chart showing total number of events by day (date) in the specified date range. Right now I can retrieve the total data set for the needed date range, but then I need to go through it and count up for each date. The app is running on google app engine and is using python. What is the best way to create a new data set showing date and corresponding counts (including if there were no events on that date) that I can then use to pass this info to a django template? Data set for this example looks like this: class Event(db.Model): event_name = db.StringProperty() doe = db.DateTimeProperty() dlu = db.DateTimeProperty() user = db.UserProperty() Ideally, I want something with date and count for that date. Thanks and please let me know if something else is needed to answer this question!
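
    One straightforward way to do this (a sketch of my own, using assumed names) is to fetch the events for the range once, then bucket them by calendar date in Python, pre-filling every date so days with no events show up with a count of zero:

    ```python
    # Sketch only: turn a list of Event entities into (date, count) pairs,
    # including zero-count days, ready to pass to a Django template.
    from datetime import timedelta

    def daily_counts(events, start_date, end_date):
        counts = {}
        day = start_date
        while day <= end_date:
            counts[day] = 0                  # pre-fill so empty days appear
            day += timedelta(days=1)
        for event in events:
            event_day = event.doe.date()     # doe is the event's UTC datetime
            if event_day in counts:          # ignore anything outside the range
                counts[event_day] += 1
        return sorted(counts.items())        # [(date, count), ...] in date order
    ```

    The resulting list can be put straight into the template context, or serialized for whatever charting library draws the line chart.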

    Read the article

  • Java Prepared Statement Error

    - by Suresh S
    Hi guys, the following code throws an error. I have an insert statement created once, and in the while loop I am dynamically setting parameters; at the end I call ps2.addBatch() again: while ( (eachLine = in.readLine()) != null)) { for (int k=stat; k <=45;k++) { ps2.setString (k,main[(k-2)]); } stat=45; for (int l=1;l<= 2; l++) { ps2.setString((stat+l),pdp[(l-1)]);// Exception } ps2.addBatch(); } This is the error: java.lang.ArrayIndexOutOfBoundsException: 45 at oracle.jdbc.dbaccess.DBDataSetImpl._getDBItem(DBDataSetImpl.java:378) at oracle.jdbc.dbaccess.DBDataSetImpl._createOrGetDBItem(DBDataSetImpl.java:781) at oracle.jdbc.dbaccess.DBDataSetImpl.setBytesBindItem(DBDataSetImpl.java:2450) at oracle.jdbc.driver.OraclePreparedStatement.setItem(OraclePreparedStatement.java:1155) at oracle.jdbc.driver.OraclePreparedStatement.setString(OraclePreparedStatement.java:1572) at Processor.main(Processor.java:233)

    Read the article

  • Ruby w/ Postgres & Sinatra - Query won't order right with parameter??

    - by alleywayjack
    So I set a variable in my main ruby file that's handling all my post and get requests and then use ERB templates to actually show the pages. I pass the database handler itself into the erb templates, and then run a query in the template to get all (for this example) grants. In my main ruby file: grants_main_order = "id_num" get '/grants' do erb :grants, :locals => {:db=>db, :order=>grants_main_order, :message=>params[:message]} end In the erb template: db = locals[:db] getGrants = db.exec("SELECT * FROM grants ORDER BY $1", [locals[:order]]) This produces some very random ordering, however if I replace the $1 with id_num, it works as it should. Is this a typing issue? How can I fix this? Using string replacement with #{locals[:order]} also gives funky results.

    Read the article

  • PHP / Zend Framework: Which object would handle a complex table join?

    - by Thomas
    I think one of the more difficult concepts to understand in the Zend Framework is how the Table Data Gateway pattern is supposed to handle multi-table joins. Most of the suggestions I've seen claim that you simply handle the joins using a $db->select()... Zend DB Select with multiple table joins Joining Tables With Zend Framework PHP Joining tables within a model in Zend Php Zend Framework Db Select Join table help Zend DB Select with multiple table joins My question is: Which object is best suited to handle this kind of multi-table select statement? I feel like putting it in the model would break the 1-1 Table Data Gateway pattern between the class and the db table. Yet putting it in the controller seems wrong because why would a controller handle a SQL statement? Anyway, I feel like ZF makes handling datasets from multiple tables more difficult than it needs to be. Any help you can provide is great... Thanks!

    Read the article

  • Custom constructors for models in Google App Engine (python)

    - by Nikhil Chelliah
    I'm getting back to programming for Google App Engine and I've found, in old, unused code, instances in which I wrote constructors for models. It seems like a good idea, but there's no mention of it online and I can't test to see if it works. Here's a contrived example, with no error-checking, etc.: class Dog(db.Model): name = db.StringProperty(required=True) breeds = db.StringListProperty() age = db.IntegerProperty(default=0) def __init__(self, name, breed_list, **kwargs): db.Model.__init__(**kwargs) self.name = name self.breeds = breed_list.split() rufus = Dog('Rufus', 'spaniel terrier labrador') rufus.put() The **kwargs are passed on to the Model constructor in case the model is constructed with a specified parent or key_name, or in case other properties (like age) are specified. This constructor differs from the default in that it requires that a name and breed_list be specified (although it can't ensure that they're strings), and it parses breed_list in a way that the default constructor could not. Is this a legitimate form of instantiation, or should I just use functions or static/class methods? And if it works, why aren't custom constructors used more often?
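
    A sketch of the commonly suggested alternative (my addition, not from the post): keep the stock constructor and expose a classmethod factory instead, since the datastore machinery also calls __init__ when it rebuilds entities from queries, and overriding it can get in the way of that.

    ```python
    # Sketch only: a factory classmethod that enforces the required arguments
    # and parses breed_list, while leaving db.Model.__init__ untouched.
    from google.appengine.ext import db

    class Dog(db.Model):
        name = db.StringProperty(required=True)
        breeds = db.StringListProperty()
        age = db.IntegerProperty(default=0)

        @classmethod
        def create(cls, name, breed_list, **kwargs):
            # kwargs still carries parent, key_name, age, and so on.
            return cls(name=name, breeds=breed_list.split(), **kwargs)

    rufus = Dog.create('Rufus', 'spaniel terrier labrador')
    rufus.put()
    ```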

    Read the article

  • How to create relationship mapping via Entity framework

    - by James
    I have following domain model: User { int Id; } City { int Id; } UserCity { int UserId, int CityId, dateTime StartDate } In the function where I have to attach a user to a city, the following code is working for me: UserCity uc = new UserCity(); //This is a db hit uc.User = MyEntityFrameworkDBContext.User.FirstOrDefault(u => u.ID == currentUserId); //this is a db hit uc.City = MyEntityFrameworkDBContext.City.FirstOrDefault(c => c.ID == currentCityId); uc.StartDate = userCityStartDate; //this is a db hit MyEntityFrameworkDBContext.SaveChanges(); Is there any way I can create relationships with just one single DB hit? The first two db hits are not required, actually.

    Read the article

  • I'm having a hard time wrapping my head around handling the Activity Lifecycle...

    - by kefs
    So it seems I've created a fatal flaw and coded an app before understanding/handling rotation/lifecycle events. Currently, all of my code is in onCreate of each activity. I've read a lot of lifecycle tutorials online, including the official dev guide info, but I'm still having an almost unbelievably hard time trying to wrap my head around the rotation/lifecycle events/methods and when to use them correctly. For example, my app currently has an activity that opens the db, inserts a record, then closes the db. If I rotate my screen on this activity, the data is re-entered into the db. Using the available lifecycle events (onPause(), onResume(), etc.), how would I prevent this db call from happening again? Would I have to pass a variable through the state saying that the db call has been done, and not to do it again? Thanks in advance.

    Read the article

  • SQL use comma-separated values with IN clause

    - by user342944
    I am developing an ASP.NET application and passing a string value like "1,2,3,4" into a procedure to select those values which are IN (1,2,3,4), but it's saying "Conversion failed when converting the varchar value '1,2,3,4' to data type int." Here is the aspx code: private void fillRoles() { /*Read in User Profile Data from database */ Database db = DatabaseFactory.CreateDatabase(); DbCommand cmd = db.GetStoredProcCommand("sp_getUserRoles"); db.AddInParameter(cmd, "@pGroupIDs", System.Data.DbType.String); db.SetParameterValue(cmd, "@pGroupIDs", "1,2,3,4"); IDataReader reader = db.ExecuteReader(cmd); DropDownListRole.DataTextField = "Group"; DropDownListRole.DataValueField = "ID"; while (reader.Read()) { DropDownListRole.Items.Add((new ListItem(reader[1].ToString(), reader[0].ToString()))); } reader.Close(); } Here is my procedure: CREATE Procedure [dbo].[sp_getUserRoles](@pGroupIDs varchar(50)) AS BEGIN SELECT * FROM CheckList_Groups Where id in (@pGroupIDs) END

    Read the article
