Search Results

Search found 34093 results on 1364 pages for 'database architecture'.

Page 270 of 1364

  • Prepare and import data into existing database

    - by Álvaro G. Vicario
    I maintain a PHP application with a SQL Server backend. The DB structure is roughly this:

        lot
        ===
        lot_id (pk, identity)
        lot_code

        building
        ========
        building_id (pk, identity)
        lot_id (fk)

        inspection
        ==========
        inspection_id (pk, identity)
        building_id (fk)
        date
        inspector
        result

    The database already has lots and buildings, and I need to import some inspections. Key points are: it's a one-time initial load; the data comes in an Excel file; and the Excel data is unaware of the DB's autogenerated IDs, so inspections must be linked to buildings through their lot_code. What are my options for doing such a data load?

        date        inspector   result  lot_code
        ==========  ==========  ======  ========
        31/12/2009  John Smith  Pass    987654X
        28/02/2010  Bill Jones  Fail    123456B
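    A minimal T-SQL sketch of one way to do this, assuming the Excel rows are first bulk-loaded into a staging table (for example with the SQL Server Import/Export wizard); the staging table name and the assumption that each lot has exactly one building are mine, not from the question:

        -- Hypothetical staging table holding the raw Excel rows
        CREATE TABLE inspection_staging (
            [date]    DATETIME,
            inspector VARCHAR(100),
            result    VARCHAR(20),
            lot_code  VARCHAR(20)
        );

        -- Resolve lot_code -> lot_id -> building_id and insert the inspections
        INSERT INTO inspection (building_id, [date], inspector, result)
        SELECT b.building_id, s.[date], s.inspector, s.result
        FROM inspection_staging AS s
        JOIN lot      AS l ON l.lot_code = s.lot_code
        JOIN building AS b ON b.lot_id   = l.lot_id;

    If a lot can contain more than one building, the second join needs an extra rule to pick the right one.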

    Read the article

  • Strategies for Error Handling in .NET Web Services

    - by Jarrod
    I have a fairly substantial library of web services built in .NET that I use as a data model for our company web sites. In most .NET applications I use the Global.asax file for profiling, logging, and creating bug reports for all exceptions thrown by the application. Global.asax isn't available for web services, so I'm curious what other strategies people have come up with to work around this limitation. Currently I just do something along these lines:

        <WebMethod()> _
        Public Function MyServiceMethod(ByVal code As Integer) As String
            Try
                Return processCode(code)
            Catch ex As Exception
                CustomExHandler(ex) 'call a custom function every time to log exceptions
                Return errorObject
            End Try
        End Function

    Does anybody have a better way of doing things besides calling a function inside every Catch?

    Read the article

  • Is writing eSQL database-independent or not?

    - by Robert Koritnik
    Using EF we can use LINQ to read data, which is rather simple (especially using fluent calls), but we have less control unless we write eSQL on our own. Is writing eSQL actually data-store-independent code? If we decide to change the data store, can the same statements still be used? Does writing eSQL strings in your code pose any serious security threats, similar to writing TSQL statements in plain strings (which is why we moved to SPs)? Could we still move eSQL scripts outside of the code and use some other technique to make them a bit more secure?

    Read the article

  • Shopping Cart Database Structure

    - by Paul Atkins
    Hi, I have been studying the database structure for shopping carts and noticed that when storing order details the product information is repeated and stored again in that table. I was wondering what the reasoning behind this would be. Here is a small example of what I mean:

        Product table:
        product_id  name       desc               price
        1           product 1  This is product 1  27.00

        Order table:
        order_id  customer_id  order_total
        1         3            34.99

        Order details table:
        order_details_id  product_id  product_name  price  qty
        1                 1           product 1     27.00  1

    So as you can see, the product name and price are stored again in the order details table. Why is this? The only reason I can think of is that the product details may change after the order has been placed, which may cause confusion. Is this correct? Thanks, Paul
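    The snapshot idea the question guesses at is typically implemented by copying the current product values into the order row at checkout time. A small T-SQL sketch using the table and column names from the example above (@order_id, @product_id and @qty stand in for values supplied by the application):

        -- Copy the product's current name and price into the order details row
        INSERT INTO order_details (order_id, product_id, product_name, price, qty)
        SELECT @order_id, p.product_id, p.name, p.price, @qty
        FROM product AS p
        WHERE p.product_id = @product_id;

    Because the values are copied, later changes to the product's price or name leave historical orders untouched.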

    Read the article

  • Ideal way to deliver large data over Web Services

    - by zengr
    We are trying to design 6 web services, which will serve another client component. The client component requires data from the web services we are implementing. The problem is that there is not just one WS involved: the client component hits a single front-end WS, which initiates a series of five more WSs that gather data from their respective data stores and finally return it to the original WS, which then delivers the data back to the client component. So if the requested data becomes huge, this will be a serious problem for our internal communication channel. What do you suggest? What can be done to avoid overloading the communication channel between the internal WSs while still delivering the data to the client component?

    Read the article

  • Subversion (20014)Internal error: database is locked on NFS

    - by Niraj Gurjar
    I have Subversion set up using Apache and DAV. The OS is RHEL 4. The repository is created on an NFS server mounted on this machine. When I try to access the repository I get the following errors in the Apache logs:

        (20014)Internal error: database is locked
        Could not fetch resource information.  [500, #0]
        Could not open the requested SVN filesystem  [500, #200030]
        Could not open the requested SVN filesystem  [500, #200030]
        The URI does not contain the name of a repository.  [403, #190001]

    I did 'chmod' on the mounted partition but the problem still persists. Any help?

    Read the article

  • UML interface: URL iframe integration

    - by Bernd
    I have two applications, A and B, both with a web-based user interface. Both applications are integrated via a URL iframe mechanism: a user can click on a link in application A and then gets the UI of application B as an iframe inside application A. Now, since both applications have an interface between each other (do they?): who provides the interface and who requires the interface, in the UML sense? And what is the main information flow on this interface?

    Read the article

  • Voting software with remote units - architectural questions

    - by David Neale
    I'm looking at designing some software that registers live votes (let's say A,B,C or D). The vote needs to be picked up and processed by a .NET engine. The remote voting units should be as small as possible. What form of data transmission should be used for the voting? The data is obviously very simple but there is a need to make sure each unit can only vote once per question. How would the data be received by the computer running the software?

    Read the article

  • help finding a hosting company with unixODBC and FreeTDS support

    - by patrick
    I need to find a hosting company that provides a LAMP stack, the P being PHP. Finding that is pretty easy, but I have a further requirement of unixODBC and FreeTDS or some equivalent. The project will require a remote connection to a Microsoft SQL 2005 database. Most of the project will use a local MySQL database, but it also requires data from a remote MS SQL 2005 database. In my reading it looks like I'll need unixODBC and FreeTDS installed on the server to make that connection. So far I've been unable to find a shared host that provides these. Can anyone suggest (or does anyone use) a host that might work? The project has budget limits, so we were hoping to find a shared host.

    Read the article

  • Query to return internal details about stored function in SQL Server database

    - by Anthony
    I have been given access to a SQL Server database that is currently used by a 3rd party app. As such, I don't have any documentation on how that application stores the data or how it retrieves it. I can figure a few things out based on the names of various tables and the parameters that the user-defined functions take and return, but I'm still getting errors at every other turn. I was thinking that it would be really helpful if I could see what the stored functions are doing with the parameters given to produce the output. Right now all I've been able to figure out is how to query for the input parameters and the output columns. Is there any built-in information_schema view that will expose what the function is doing between input and output?
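    For what it's worth, SQL Server does expose the body of a non-encrypted user-defined function through its catalog views; a couple of queries along these lines (dbo.SomeFunction is a placeholder name):

        -- Full definition of the function (returns NULL if it was created WITH ENCRYPTION)
        SELECT OBJECT_DEFINITION(OBJECT_ID('dbo.SomeFunction')) AS function_body;

        -- The same text via sys.sql_modules; INFORMATION_SCHEMA.ROUTINES.ROUTINE_DEFINITION
        -- also works but truncates the definition at 4000 characters
        SELECT m.definition
        FROM sys.sql_modules AS m
        WHERE m.object_id = OBJECT_ID('dbo.SomeFunction');

    sp_helptext 'dbo.SomeFunction' prints the same thing line by line in Management Studio.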

    Read the article

  • Should frontend and backend be handled by different controllers?

    - by DR
    In my previous learning projects I always used a single controller, but now I wonder if that is good practice or even always possible. In all RESTful Rails tutorials the controllers have a show, an edit and an index view. If an authorized user is logged on, the edit view becomes available and the index view shows additional data manipulation controls, like a delete button or a link to the edit view. Now I have a Rails application which falls exactly into this pattern, but the index view is not reusable: the normal user sees a flashy index page with lots of pictures, a complex layout and no JavaScript requirement, while the admin user's index has a completely different minimalistic design, a jQuery table and lots of additional data. I'm not sure how to handle this case. I can think of the following:

        1. Single controller, single view: the view is split into two large blocks/partials using an if statement.
        2. Single controller, two views: index and index_admin.
        3. Two different controllers: BookController and BookAdminController.

    None of these solutions seems perfect, but for now I'm inclined to use the 3rd option. What's the preferred way to do this?

    Read the article

  • Querying and ordering results of a database in grails using transient fields

    - by Azder
    I'm trying to display paged data out of a Grails domain object. For example: I have a domain object Employee with the properties firstName and lastName, which are transient and whose getter/setter methods encrypt/decrypt the data. The data is saved in the database in encrypted binary format and is thus not sortable by those fields. It is not sortable by the transient fields either, as noted in http://www.grails.org/GSP+Tag+-+sortableColumn . So now I'm trying to find a way to use the transients in a way similar to:

        Employee.withCriteria( max: 10, offset: 30 ){
            order 'lastName', 'asc'
            order 'firstName', 'asc'
        }

    The class is:

        class Employee {
            byte[] encryptedFirstName
            byte[] encryptedLastName

            static transients = [ 'firstName', 'lastName' ]

            String getFirstName(){ decrypt("encryptedFirstName") }
            void setFirstName(String item){ encrypt("encryptedFirstName", item) }
            String getLastName(){ decrypt("encryptedLastName") }
            void setLastName(String item){ encrypt("encryptedLastName", item) }
        }

    Read the article

  • What is the best approach for creating a Common Information Model?

    - by Kaiser Advisor
    Hi, I would like to know the best approach to create a Common Information Model. Just to be clear, I've also heard it referred to as a canonical information model, semantic information model, and master data model - as far as I can tell, they are all referring to the same concept. I've heard in the past that a combined "top-down" and "bottom-up" approach is best. This has the advantage of incorporating "ivory tower" architects and developers - the work will meet somewhere in the middle and usually be both logical and practical. However, this involves bringing in a lot of people with different skill sets. I've also seen a couple of references to the Distributed Management Task Force, but I can't glean much on best practices in terms of CIM development. This is something I'm quite interested in getting some feedback on, since having a strong CIM is a prerequisite to SOA. Thanks for your help! KA

    Update: I've heard another strategy goes along with overall SOA implementation: get the business involved, and seek executive sponsorship. This would be part of the "top-down" effort.

    Read the article

  • display HTML content from database with formatting in it

    - by Gaurav Sharma
    Hi all, I have used wmd-editor in my CakePHP v1.3 application. The config which I have written is as follows:

        wmd_options = {
            output: "HTML",
            lineLength: 40,
            buttons: "bold italic | link blockquote code image | ol ul heading hr",
            autostart: true
        };

    When I submit the form, the HTML in the wmd-enabled textarea is saved in the database with htmlentities() applied to the text, and then I display it with html_entity_decode(). But the text is displayed as-is, including the HTML markup, like this:

        <p><strong>hello dear friends</strong></p>\n\n<pre><code>I want to make sure that everything that you type is visible clearly.\nadasfafas\n</code></pre>\n\n<blockquote>\n <p>sadgsagasdgxcbxcbxc</p>\n</blockquote>\n\n<p><em>sadfgsgasdsgasgs</em></p>\n\n<p><b><a href="http://kumu.in">this is the link</a></b></p>

    Please help me solve this problem. Thanks

    Read the article

  • How to prove that using subselect queries in SQL is killing server performance

    - by adopilot
    One of my jobs is to maintain our database; we usually have trouble with poor performance while running reports and working with that database. When I look at the queries our ERP sends to the database, I see a lot of totally needless subselect queries inside the main queries. As I am not a member of the development team that created the program we are using, they do not like it much when I criticize their code and work; let's say they do not take my review seriously. So I'm asking you a few questions about subselects in SQL: Do subselects take a lot more time than left outer joins? Is there any blog, article or anything else where avoiding subselects is recommended? How can I prove that if we avoid subselects in a query, that query is going to be faster? Our database server is MSSQL 2005.
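    One way to gather evidence rather than opinion is to run the two equivalent query shapes side by side with statistics switched on and compare reads and elapsed time (or the actual execution plans). This is only a sketch; customer and orders are made-up tables standing in for the ERP schema:

        SET STATISTICS IO ON;
        SET STATISTICS TIME ON;

        -- Correlated subselect version
        SELECT c.customer_id,
               (SELECT SUM(o.total)
                FROM orders AS o
                WHERE o.customer_id = c.customer_id) AS order_total
        FROM customer AS c;

        -- Equivalent left outer join version
        SELECT c.customer_id, SUM(o.total) AS order_total
        FROM customer AS c
        LEFT OUTER JOIN orders AS o ON o.customer_id = c.customer_id
        GROUP BY c.customer_id;

        SET STATISTICS IO OFF;
        SET STATISTICS TIME OFF;

    Be prepared for the possibility that the optimizer produces near-identical plans for both; the measured numbers, not intuition, are what will persuade (or refute) either side.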

    Read the article

  • What considerations should be made for a web app to be released on a cloud hosted system?

    - by Rhubarb
    I have a web app that is primarily a WordPress app, but it pulls content from a Django app, simply by calling a service that uses Django models. My understanding of cloud computing is a bit vague. If the site needs to scale up at short notice, does the cloud provider (Amazon, Rackspace, whoever) simply spin up new instances (copies) of my initially configured server? How is state managed between all of them? Are there any good primers on this subject? It's hard to find much out there without getting caught up in the marketing.

    Read the article

  • JavaEE Application Server or Lightweight Container?

    - by Jeff Storey
    Let me preface this by saying this is not an actual situation of mine but I'm asking this question more for my own knowledge and to get other people's inputs here. I've used both Spring and EJB3/JBoss, and for the smaller types of applications I've built, Spring (+Tomcat when needed) has been much simpler to use. However, when scaling up to larger applications that require things like load balancing and clustering, is Spring still a viable solution? Or is it time to turn to a solution like EJB3/JBoss when you start to get big enough to need that? I'm not sure if I've scoped the problem well enough to get a good answer, so please let me know. Thanks, Jeff

    Read the article

  • php, user-uploaded files, version control, and website deployment

    - by user151841
    I have a website whose code I regularly update. I keep it in version control. When I want to deploy a new version of the site, I do an export and then symlink the served directory name to the directory of the deployment. There is a place where users can upload files, and I noticed once that, after I had deployed a new version, the user files were gone! Of course, I hadn't added them to the repository, and since the served site was from an export, they weren't uploaded into a version-controlled directory anyway. PHP doesn't yet have integrated svn functionality, so I couldn't do much programmatically with user-uploaded files. My solution was to create an additional website, files.website.com, which sits in a parallel directory to the served website and is served out of a directory that is under version control. That way the files don't get obliterated when I do an upgrade to the website. From time to time, I manually add uploaded files to the svn project, delete user-deleted ones, and commit the new version. I'm working on a shell script to run from cron to do this, but it isn't my forte, so it's on the backburner as it's not a pressing need. Is there a better way to do this?

    Read the article

  • Distributed Message Ordering

    - by sbanwart
    I have an architectural question on handling message ordering. For purposes of this question, the transport is irrelevant, so I'm not going to specify one. Say we have three systems: a website, a CRM and an ERP. For this example, the ERP will be the "master" system in terms of data ownership. The website and the CRM can both send a new customer message to the ERP system. The ERP system then adds a customer and publishes the customer with the newly assigned account number so that the website and CRM can add the account number to their local customer records. This is a pretty straightforward process. Next we move on to placing orders. The account number is required in order for the CRM or website to place an order with the ERP system. However, the CRM will permit the user to place an order even if the customer lacks an account number. (For this example assume we can't modify the CRM behavior.) This creates the possibility that a user could create a new customer and place an order before the account number gets updated in the CRM. What is the best way to handle this scenario? Would it be best to send the order message sans account number and let it go to an error queue? Would it be better to have the CRM endpoint hold the message and wait until the account number is updated in the CRM? Maybe something completely different that I haven't thought of? Thanks in advance for any help.

    Read the article

  • Architectural decision: Qt or Eclipse Platform?

    - by umanga
    We are in the process of designing a tool to be used with an HDEM (High Definition Electron Microscope). We get stacks of 2D images from the HDEM, and the first step is detecting borders on the sections. After detecting the edges of the 2D slices, the next step is to construct a 3D model from those slices. The border-detecting algorithm is implemented by one of our professors, who has used C and suggests we use it too (for high performance, and it will probably be parallelised in future). We have to develop a comprehensive UI, 3D viewer, 2D editor, etc., use this algorithm, and support the usual features like project save/open, undo, redo and so on. Our technology options are: (A) build the entire platform from scratch using Qt, or (B) use the Eclipse Platform. Our concern is that if we choose (A) we can easily integrate the border-detecting algorithm because the development environment is C/C++, but we have to implement the basic features from scratch; if we choose (B) we get the basic features from the Eclipse Platform, but integrating C libraries is going to be a tedious task. Any suggestions on this?

    Read the article

  • Cakephp database migration error

    - by Vijay Kumbhar
    Hello all, I am using Ubuntu + CakePHP 1.3. I am trying database migration with the help of the CakeDC migrations plugin. I configured the plugin as per the instructions, but when I go to the terminal, cd to the application path (application_path/app/) and run 'cake migration help', it gives me the following output instead:

        Hello user,
        Welcome to CakePHP v1.2 Console

        Current Paths:
         -working: /path/to/cake/
         -root:    /path/to/cake/
         -app:     /path/to/cake/app/
         -core:    /path/to/cake/

        Changing Paths:
        your working path should be the same as your application path
        to change your path use the '-app' param.
        Example: -app relative/path/to/myapp or -app /absolute/path/to/myapp

        Available Shells:
         app/vendors/shells/:
             - none
         vendors/shells/:
             - none
         cake/console/libs/:
             acl  api  bake  console  extract

        To run a command, type 'cake shell_name [args]'
        To get help on a specific command, type 'cake shell_name help'

    Then I followed the steps given in http://book.cakephp.org/view/108/The-CakePHP-Console ($ cake -app /path/to/app) but I am not getting any success. Can anybody help me out of this issue? Thanks in advance.

    Read the article

  • Adding data in multiple tables in vb.net

    - by user225269
    This is a WinForms app and I'm using MySQL as the database. I'm trying to add data into multiple tables; here is my code:

        If TextBox14.Text = "" Or TextBox7.Text = "" Or TextBox10.Text = "" Then
            MsgBox("Please fill up the fields with a labels in bold letters!", MsgBoxStyle.Information)

        cn = New MySqlConnection("Server=localhost; Database=school;Uid=root;Pwd=nitoryolai123$%^;") 'provider to be used when working with access database
        cn.Open()
        cmd = New MySqlCommand("select * from parents, mother, father", cn)
        cmd.CommandText = "insert into parents values('" + idnum + "','" + p_contact + "','" + p_ad + "')"
        cmd.CommandText = "insert into mother values('" + idnum + "','" + mother + "','" + mother_occu + "')"
        cmd.CommandText = "insert into father values('" + idnum + "','" + father + "',''" + father_occu + "')"
        cmd.ExecuteNonQuery()

    I get this error, please help:

        Index and length must refer to a location within the string.
        Parameter name: length
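    For comparison, here is roughly what the three intended inserts look like as plain SQL run as separate statements in one transaction (MySQL syntax; the column lists are placeholders, since the real table definitions are not shown in the question):

        START TRANSACTION;

        -- Hypothetical column names; adjust to the real parents/mother/father tables
        INSERT INTO parents (id_number, contact, address)  VALUES ('2010-001', '555-1234', 'Some address');
        INSERT INTO mother  (id_number, name, occupation)  VALUES ('2010-001', 'Jane Doe', 'Teacher');
        INSERT INTO father  (id_number, name, occupation)  VALUES ('2010-001', 'John Doe', 'Engineer');

        COMMIT;

    Note that in the VB code above cmd.CommandText is assigned three times but ExecuteNonQuery is called only once, so at most the last statement ever runs; also, the third string has an extra quote before father_occu (',''  where the other lines have ',').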

    Read the article

  • .NET: How would I build a DAL to meet my requirements?

    - by Jonno
    Assume that I must deploy an ASP.NET app over the following 3 servers:

        1) DB - not public
        2) 'middle' - not public
        3) Web server - public

    I am not allowed to connect from the web server to the DB directly; I must pass through 'middle', purely to slow down an attacker who has breached the web server. All DB access is via stored procedures; no table access. I simply want to provide the web server with an ADO DataSet (I know many will dislike this, but this is the requirement). Using ASMX web services works, but XML serialisation is slow and it's an extra set of code to maintain and deploy. Using an SSH/VPN tunnel so that the web server connects to the DB 'via' the middle server seems to remove any possible benefit of maintaining 'middle'. Using WCF binary/TCP removes the XML problem, but there is still extra code. Is there an approach that provides the ease of SSH/VPN, but the potential benefit of having the DAL on the middle server? Many thanks.

    Read the article
