Search Results

Search found 22017 results on 881 pages for 'production support'.


  • How to create a rails staging environment in engineyard?

    - by siulamvictor
    I have a production instance on EngineYard up and running well. I would like to create a new staging instance for internal testing. I cloned the existing production instance and changed the Framework Environment to staging. I can deploy all the code to the staging instance from GitHub, and EngineYard reports the server is fully configured and ready. I have subdomain-fu in my Rails app, as I have some subdomain handling, and I set the subdomain initializer like this: SubdomainFu.tld_sizes = {:development => 1, :test => 0, :production => 1, :staging => 2} As the production instance uses the domain xxxxx.com, I would like my staging instance to use the domain staging.xxxxx.com. But I get an error when I open this domain: it seems the app uses xxxxx.com as the domain, not staging.xxxxx.com. I checked the EngineYard database.yml. It uses the xxxxx_production database, where I supposed it should be xxxxx_staging. It seems the EngineYard instance is not set to the staging environment, but has simply cloned all the settings from the production server. Does anyone have experience with this who can show me how to fix it? Thanks. :)
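
    A minimal sketch of what the staging entry in EngineYard's database.yml should end up looking like once the instance really runs in the staging environment (the adapter, credentials and the xxxxx_staging name are assumptions, not values from the poster's account):

        staging:
          adapter: mysql
          database: xxxxx_staging
          username: deploy
          password: secret
          host: localhost

    Since database.yml, subdomain-fu and everything else key off the one RAILS_ENV value, verifying that the cloned instance actually boots with RAILS_ENV=staging is the usual first check.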

    Read the article

  • Sharepoint: Integrity of lookup fields after a list import

    - by driAn
    Hi there. I have a question about the behavior of lookup fields when importing data: I wonder how lookup fields behave when the list they point to is being replaced/imported. To explain the issue, assume we have these two SharePoint lists:

    Product Types
    -------------
    + Type Name
    + Code Nr
    + etc

    Products
    --------
    + Product Name
    + Product Type (lookup field to list "Product Types")
    + etc

    In my scenario, the Products list contains production data on the production SharePoint platform; it is filled with data by the business users. The Product Types list, however, contains rather static data and is maintained by the developer. Now, after a development cycle, the developer wants to deploy his new webparts and his new data (the Product Types list). The developer performs the following procedure:

    On the dev machine: export the "Product Types" list using stsadm
    On the production machine: delete all items in the "Product Types" list
    On the production machine: import the "Product Types" list using stsadm

    This means we basically replace the "Product Types" list on the production server while keeping the Products list as it is. Now the questions: Is this safe? Will the lookup references break under certain circumstances? Are there any downsides to this import/export procedure? What happens if someone accesses a Product during the import - will the (now invalid) reference clear its own content (become a null value)? What happens if the schema of the "Product Types" list changes (a new column) - will this cause any trouble? Thanks for all feedback and suggestions!

    Read the article

  • What is the suggested approach to Syncing/Backing up/Restoring from SQL Server 2008 to SQL Server 2005?

    - by Eoin Campbell
    I only have SQL Server 2008 (Dev Edition) on my development machine. I only have SQL Server 2005 available with my hosting company (and I don't have direct connection access to this database). I'm just wondering what the best approach is for getting the initial DB structure and data into production, and for keeping any structural or data changes in sync in future. As far as I can see:

    Replication - not an option, because I can't connect to the production DB.
    Restoring a backup - not an option, because as far as I can see you cannot export a DB from 2008 that is restorable in 2005 (even with the 2008 DB set to 2005 compatibility mode), and it wouldn't make sense to restore production over the top of my dev version anyway.
    Dump all the scripts from my 2008 database, revert my dev machine from 2008 to 2005, and recreate the database from the scripts; then just use backup and restore to get the initial DB into production, and run scripts through the web panel from that point onwards.
    Dump all the scripts from my 2008 database and generate the entire 2005 DB from scripts in production; then run scripts through the web panel from that point onwards.

    With the last two options, I'd probably need to script all the data inserts as well using some tool (which I presume exists on the web). Are there any other possible solutions that I'm not considering?

    Read the article

  • A simple Python deployment problem - a whole world of pain

    - by Evgeny
    We have several Python 2.6 applications running on Linux. Some of them are Pylons web applications; others are simply long-running processes that we run from the command line using nohup. We're also using virtualenv, both in development and in production. What is the best way to deploy these applications to a production server? In development we simply get the source tree into any directory, set up a virtualenv and run - easy enough. We could do the same in production, and perhaps that really is the most practical solution, but it just feels a bit wrong to run svn update in production. We've also tried fab, but it just never works the first time; for every application something else goes wrong. It strikes me that the whole process is just too hard, given that what we're trying to achieve is fundamentally very simple. Here's what we want from a deployment process:

    We should be able to run one simple command to deploy an updated version of an application. (If the initial deployment involves a bit of extra complexity, that's fine.)
    When we run this command it should copy certain files, either out of a Subversion repository or out of a local working copy, to a specified "environment" on the server, which probably means a different virtualenv. We have both staging and production versions of the applications on the same server, so they need to be kept separate somehow. If it installs into site-packages, that's fine too, as long as it works.
    We have some configuration files on the server that should be preserved (i.e. not overwritten or deleted by the deployment process).
    Some of these applications import modules from other applications, so they need to be able to reference each other as packages somehow. This is the part we've had the most trouble with! I don't care whether it works via relative imports, site-packages or whatever, as long as it works reliably in both development and production.
    Ideally the deployment process should automatically install external packages that our applications depend on (e.g. psycopg2).

    That's really it! How hard can it be?
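
    For the "one simple command" requirement, a minimal Fabric sketch (Fabric 1.x API; the host name, paths and restart command are invented for illustration, not taken from the question):

        # fabfile.py -- run with: fab deploy
        from fabric.api import cd, env, run

        env.hosts = ['deploy@example-server']    # hypothetical host

        APP_DIR = '/srv/apps/myapp'              # hypothetical checkout location
        VENV = '/srv/envs/myapp'                 # hypothetical virtualenv

        def deploy():
            with cd(APP_DIR):
                run('svn update')                # or export a tagged revision
                run('%s/bin/pip install -r requirements.txt' % VENV)
                run('kill -HUP $(cat myapp.pid) || true')   # hypothetical restart

    A requirements.txt per application also covers the last two wishes: shared internal code published as its own package (even from an internal Subversion URL) gets installed into each environment's site-packages alongside external dependencies like psycopg2.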

    Read the article

  • MySQL table data transformation -- how can I dis-aggregate MySQL time data?

    - by lighthouse65
    We are coding for a MySQL data warehousing application that stores descriptive data (User ID, Work ID, Machine ID, Start and End Time columns in the first table below) associated with time and production quantity data (Output and Time columns in the first table below) upon which aggregate (SUM, COUNT, AVG) functions are applied. We now wish to dis-aggregate the time data for another type of analysis. Our current data table design:

    +---------+---------+------------+---------------------+---------------------+--------+------+
    | User ID | Work ID | Machine ID | Event Start Time    | Event End Time      | Output | Time |
    +---------+---------+------------+---------------------+---------------------+--------+------+
    | 080025  | ABC123  | M01        | 2008-01-24 16:19:15 | 2008-01-24 16:34:45 | 2120   | 930  |
    +---------+---------+------------+---------------------+---------------------+--------+------+

    The dis-aggregating reprocessing we would like to do would transform the table content to a granularity of minutes, rather than the current production-event ("Event Start Time" and "Event End Time") granularity. The reprocessing of existing table rows would look like:

    +---------+---------+------------+---------------------+--------+
    | User ID | Work ID | Machine ID | Production Minute   | Output |
    +---------+---------+------------+---------------------+--------+
    | 080025  | ABC123  | M01        | 2008-01-24 16:19    | 133    |
    | 080025  | ABC123  | M01        | 2008-01-24 16:20    | 133    |
    | 080025  | ABC123  | M01        | 2008-01-24 16:21    | 133    |
    | 080025  | ABC123  | M01        | 2008-01-24 16:22    | 133    |
    | 080025  | ABC123  | M01        | 2008-01-24 16:23    | 133    |
    | 080025  | ABC123  | M01        | 2008-01-24 16:24    | 133    |
    | 080025  | ABC123  | M01        | 2008-01-24 16:25    | 133    |
    | 080025  | ABC123  | M01        | 2008-01-24 16:26    | 133    |
    | 080025  | ABC123  | M01        | 2008-01-24 16:27    | 133    |
    | 080025  | ABC123  | M01        | 2008-01-24 16:28    | 133    |
    | 080025  | ABC123  | M01        | 2008-01-24 16:29    | 133    |
    | 080025  | ABC123  | M01        | 2008-01-24 16:30    | 133    |
    | 080025  | ABC123  | M01        | 2008-01-24 16:31    | 133    |
    | 080025  | ABC123  | M01        | 2008-01-24 16:32    | 133    |
    | 080025  | ABC123  | M01        | 2008-01-24 16:33    | 133    |
    | 080025  | ABC123  | M01        | 2008-01-24 16:34    | 133    |
    +---------+---------+------------+---------------------+--------+

    So the reprocessing would take an existing row of data created at the granularity of a production event and change the granularity to minutes, eliminating the now-redundant (Event End Time, Time) columns while doing so. It assumes a constant rate of production and divides output by the difference in minutes plus one to populate the new table's Output column. I know this can be done in code... but can it be done entirely in a MySQL insert statement (or otherwise entirely in MySQL)? I am thinking of an INSERT INTO ... SELECT construction but keep getting stuck. An additional complexity is that there are hundreds of machines to include in the operation, so there will be multiple rows (one for each machine) for each minute of the day. Any ideas would be much appreciated. Thanks.
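
    This can be done set-based given one extra ingredient: a helper table of integers. A sketch, with table and column names assumed from the layouts above (minutes_seq(n) holds 0, 1, 2, ... up to the longest possible event):

        INSERT INTO production_minute
               (user_id, work_id, machine_id, production_minute, output)
        SELECT e.user_id, e.work_id, e.machine_id,
               DATE_FORMAT(e.event_start_time + INTERVAL s.n MINUTE, '%Y-%m-%d %H:%i'),
               e.output / (TIMESTAMPDIFF(MINUTE, e.event_start_time, e.event_end_time) + 1)
        FROM   production_event AS e
        JOIN   minutes_seq AS s
          ON   s.n <= TIMESTAMPDIFF(MINUTE, e.event_start_time, e.event_end_time);

    The join fans each event row out into one row per elapsed minute, so the hundreds-of-machines case needs nothing extra: every source row expands independently.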

    Read the article

  • UNION on two tables with a WHERE clause on one of them

    - by Lostdrifter
    Currently I have two tables, both with the same structure, that are going to be used in a web application. The two tables are production and temp. The temp table contains one additional column, called [signed up]. Currently I generate a single list using two columns that are found in each table (recno and name); using these two fields I'm able to support my web application's search function. Now what I need to do is limit the items from the second table that can be used in the search. The reason for this is that once a person is "signed up", a similar record is created in the production table and will have its own recno. Doing:

    Select recno, name from production
    UNION ALL
    Select recno, name from temp

    ...will show me everyone. I have tried:

    Select recno, name from production
    UNION ALL
    Select recno, name from temp WHERE signup <> 'Y'

    But this returns nothing. Can anyone help?
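
    The usual culprit for "returns nothing" with this shape of query is a NULL: if the not-yet-signed-up rows hold NULL in the sign-up column, then signup <> 'Y' evaluates to UNKNOWN and filters every temp row out. A sketch of the fix, assuming that is the case:

        Select recno, name from production
        UNION ALL
        Select recno, name from temp
        WHERE signup IS NULL OR signup <> 'Y'

    Note also that in a UNION, a trailing WHERE clause binds only to the SELECT it follows, which is what is wanted here; if literally zero rows come back, the production half of the union deserves a look too.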

    Read the article

  • Cherrypicking versus Rebasing

    - by Lakshman Prasad
    The following is a scenario I commonly face: I have a set of commits on master or design that I want to put on top of the production branch. I tend to create a new branch based on production, cherry-pick these commits onto it, and merge it into production. Then, when I merge master into production, I face merge conflicts, because even though the changes are the same, they are registered as different commits due to the cherry-pick. I have found some workarounds to deal with this, all of which are laborious and can be termed "hacks". Although I haven't done much rebasing, I believe it too creates new commit hashes. Should I be using rebasing where I am cherry-picking? What other advantages would that have?
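
    One way to sidestep the duplicate-commit problem entirely is to commit the fix exactly once, on a branch cut from production, and merge that same branch into every line that needs it, so both histories share identical commits. A sketch:

        git checkout -b hotfix production   # branch from the production tip
        # ...commit the changes here, once...
        git checkout production
        git merge hotfix                    # production gets the commits
        git checkout master
        git merge hotfix                    # master gets the *same* commits

    Rebasing, as suspected, rewrites commit hashes just as cherry-picking does, so by itself it would not remove the later merge conflicts.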

    Read the article

  • Advice on setting up SVN

    - by Vivek Chandraprakash
    I'm trying to set up an SVN server. I maintain a couple of websites based on ASP. There are currently three environments:

    Development: any new modules/enhancements are done in this environment
    Staging: a mirror of production
    Production: the public-facing website

    Currently, when there's an update to the website, this is what we do:

    do the update in development
    copy the file to staging
    copy the file to production

    In production we take a backup of the old file by renaming it. I would like to make this simpler by installing SVN and stopping the file-renaming business, but I'm not sure how many repositories to have per website - should it be three, or two? I'm absolutely new to SVN; I've just installed it on a Linux-based server (Ubuntu). Can you please advise how to go about it? Thanks -Vivek
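
    A common starting point, sketched with one repository per website and the standard trunk/branches/tags layout (paths are examples):

        svnadmin create /var/svn/website1
        svn mkdir -m "standard layout" \
            file:///var/svn/website1/trunk \
            file:///var/svn/website1/branches \
            file:///var/svn/website1/tags

    Development then happens on trunk, while staging and production each become checkouts (or exports) of trunk or of a release tag; Subversion's history replaces the renamed backup files, so one repository per website is usually enough.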

    Read the article

  • Debugging CodeIgniter's 404 errors

    - by Alex
    I'm having a nightmare uploading my site to the production server. The site runs fine locally and on a staging server (which has exactly the same server and settings as the production site). However, when I deploy to production I'm getting a 404 error from CI. CodeIgniter's 404 error pages are frustrating because it seems as if I can't access other libraries from them. How can I go about debugging the error and seeing which controller is being called, etc.?
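
    One low-tech way to watch what CodeIgniter thinks it is routing (a sketch; the threshold values are from CI 1.7/2.x):

        // application/config/config.php -- temporarily raise the log level
        $config['log_threshold'] = 2;   // 0 = off, 1 = errors only, 2 = debug

    CI then writes debug messages (router, loader and controller initialization) to its log directory, which often shows where the routing stops. Differences in file-name case between controllers and URLs are another classic cause when a site 404s only on a Linux production box.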

    Read the article

  • Compare structures of two databases?

    - by streetparade
    Hello, I wanted to ask whether it is possible to compare the complete database structure of two huge databases. We have two databases: one is a development database, the other a production database. I've sometimes forgotten to make changes to the production database before we released some parts of our code, with the result that the production database doesn't have the same structure, so when we release something we get errors. Is there a way to compare the two, or synchronize them?
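
    If these are MySQL databases (an assumption; the question does not say), one low-tech comparison is to dump the structure only from both servers and diff the output:

        mysqldump --no-data -h dev-host  -u user -p mydb > dev_schema.sql
        mysqldump --no-data -h prod-host -u user -p mydb > prod_schema.sql
        diff dev_schema.sql prod_schema.sql

    Keeping schema changes as versioned migration scripts, run on both servers at release time, prevents the drift from recurring.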

    Read the article

  • Partial Git deployment strategy?

    - by MatW
    I need to set up a Kohana dev environment that allows me to make full use of shared module/system classes across separate applications, each application typically belonging to a different client. I use Git for source control, but am struggling to come up with a clean deployment method that will allow me to pull only those parts of the dev environment specific to a client/app down into that client's production environment (assuming that the client's production environment will have Git installed).

    Dev environment:
    - kohana
      - applications
        - clientapp1
        - clientapp2
      - modules
      - public_html
        - clientapp1
        - clientapp2
      - system
        - 3.0.1
        - 3.0.5

    Client 1's production environment:
    - /
      - applications
        - clientapp1
      - modules
      - public_html
        - client_app1
      - system
        - 3.0.5

    Naturally, I want to have total control over each client "sub-repo" as if it were an independent repo (in terms of gitignore, etc.). I have seen topics that cover Git's sparse checkout feature, but it seems like it may cause a few problems down the line from a maintenance point of view, and I don't like the idea of the entire repo's metadata existing in the client's production environment repo. As you can probably tell, I'm not exactly a Git power user, so any suggestions/wisdom are very welcome!
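
    One arrangement that avoids sparse checkout altogether: make each client app (and the shared code) its own repository, and stitch them together in the dev environment with submodules. A sketch with invented repository URLs:

        # in the dev superproject
        git submodule add git@host:clientapp1.git applications/clientapp1
        git submodule add git@host:clientapp2.git applications/clientapp2
        git submodule add git@host:shared-modules.git modules
        git submodule update --init

    A client's production box then clones only its own repositories, each with its own .gitignore and history, and never carries the other clients' metadata.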

    Read the article

  • Deploying Multiple Environments in Spring-MVC

    - by jboyd
    Currently all web apps are deployed using separate config files:

    <!-- <import bean.... production/> -->
    <import bean... development/>

    This has disadvantages that I'm sure everyone is familiar with, even if you only need to swap out one config file (wondering what was just deployed, without searching through XML, is one of them). I want to add logging to my application that basically says 'RUNNING IN PRODUCTION MODE', with a description of the services deployed and what mode they are working in:

    RUNNING IN PRODUCTION MODE
    Client Service - Production
    Messaging Service - Local

    and so on... Is this possible in Spring using a conventional deployment (putting a WAR on a server)? What other things do people do to manage deployments and software configurations? If not, what other ways could you achieve something similar?
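
    One sketch of doing the swap without editing XML per deploy: resolve the import location from a JVM system property, so -Denv=production picks the production file (Spring resolves ${...} placeholders in import locations against system properties; the file names here are assumptions):

        <!-- applicationContext.xml -->
        <import resource="services-${env}.xml"/>

    For the 'RUNNING IN PRODUCTION MODE' banner, a small bean declared in each environment-specific file whose init method logs the mode and the service descriptions gives the startup report without any extra framework.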

    Read the article

  • "One or more breakpoints cannot be set and have been disabled. Execution will stop at the beginning

    - by sam
    I set a breakpoint in my code in Visual C++, but when I run, I see the error mentioned in the title. I know this question has been asked before on Stack Overflow (http://stackoverflow.com/questions/657470/breakpoints-cannot-be-set-and-have-been-disabled-problem), but none of the answers there fully explained the problem I'm seeing. The closest I can see is something about the linker, but I don't understand that - so if someone could explain in more detail, that would be great. In my case, I have two projects in Visual C++: the production dsw and the test code dsw. I have loaded and rebuilt both dsws in debug mode. I want a breakpoint in the production code, which is run via the test scripts. My issue is that I get the error message when I run the test code, because the breakpoint is in the production code, which isn't loaded when the test starts. Near the beginning of the test script there is a mytest_initialize() command; I imagine this goes off and loads the production DLL. Once this line has executed, I can put the breakpoint in my production code and run until I hit it. But it's quite annoying to have to run to this line, set the breakpoint and continue every time I want to run the test. So I think the problem is that Visual C++ doesn't realise the two projects are related. Is this a linker issue? What does the linker do, and what settings should I change to make this work? Thanks in advance. Apologies if I should instead be appending this question to the existing one; this is my first post, so I'm not quite sure how this should work.

    Read the article

  • How do I get Phusion Passenger to work with Django for App Engine?

    - by Mike
    I'm having a devil of a time getting Phusion Passenger to work with django-nonrel for Google's App Engine. I can get it to work for GoogleAppEngineLauncher and for the production server, but not Passenger; or for Passenger and GoogleAppEngineLauncher, but not the production server; or for Passenger and the production server, but not GoogleAppEngineLauncher. How do I get my app to deploy on all three?

    Read the article

  • SQL Server job (stored proc) trace

    - by Jit
    Hi friends, I need your suggestions on tracing this issue. We run data-load jobs in the early morning, loading data from an Excel file into a SQL Server 2005 DB. When the job runs on the production server, it often takes 2 to 3 hours to complete. We have drilled down to one job step which takes 99% of the total time. Running that job step (stored procs) in the staging environment (with the same production database restored) takes 9 to 10 minutes, yet the same step takes hours on the production server when it runs in the early morning as part of the job. The production server always gets stuck at that particular job step. I would like to run a trace on just that job step (around 10 stored procs run for each user in a while loop within the step) and collect the information to figure out the issue. What ways are available in SQL Server 2005 to achieve this? I want to run the trace only for these SPs, not for a whole time period on the production server, as a trace gives lots of information and it becomes very difficult for me (not being a DBA) to analyze that much trace output and figure out the issue. So I want to collect info about these specific SPs only. Let me know what you suggest. Appreciate your time and help. Thanks.
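
    Short of a full trace, SQL Server 2005's plan-cache DMVs can answer "which statements inside that step burned the time" after the job has run, with no Profiler session at all. A sketch:

        SELECT TOP 20
               qs.total_elapsed_time / 1000 AS total_elapsed_ms,
               qs.execution_count,
               st.text AS batch_text
        FROM   sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.total_elapsed_time DESC;

    If a server-side trace is still wanted, Profiler can script one out, and a filter on ObjectName restricts the captured events to just those ten stored procedures.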

    Read the article

  • SQL Server insert slow

    - by andrew007
    Hi, I have two servers where I installed SQL Server 2008: Production (RAID 1 on SCSI disks) and Test (IDE disk). When I execute a script with about 35,000 inserts, the test server needs 30 seconds, but the production server takes more than 2 minutes! Does anybody know why there is such a difference? I mean, the DB is configured in the same way, and the production server also has the RAID config, a better processor and more memory... THANKS!
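
    Independent of the hardware specs, one classic cause fits these numbers: 35,000 single-row INSERTs each commit (and flush the transaction log to disk) individually by default, and a proper RAID/SCSI array honours every flush while an IDE disk's write cache may not. Wrapping the script in one transaction is a quick way to test this theory:

        BEGIN TRANSACTION;
        -- ... the ~35,000 INSERT statements ...
        COMMIT TRANSACTION;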

    Read the article

  • RMO on Windows 7 - The Specified module could not be found

    - by AGoodDisplayName
    My machine: Windows XP (x86), VS2008, .NET 3.5, SQL Server 2005, WinForms - the app works fine. Production machines: Windows 7 (x64), SQL Server 2005 Express - the app starts but throws an exception. Visual Studio is targeting x86 in the setup project and the RMO project. Visual Studio gives me a couple of warnings, but I can still build:

    Unable to find dependency 'MICROSOFT.SQLSERVER.MANAGEMENT.SQLPARSER' (Signature='89845DCD8080CC91' Version='10.0.0.0') of assembly 'Microsoft.SqlServer.Smo.dll'
    Unable to find dependency 'MICROSOFT.SQLSERVER.MANAGEMENT.SQLPARSER' (Signature='89845DCD8080CC91' Version='10.0.0.0') of assembly 'Microsoft.SqlServer.Management.SmoMetadataProvider.dll'

    This is a simple RMO (Replication Management Objects) app that initiates a pull subscription in SQL Server 2005 and displays status. It works fine on my machine but fails on the production machine. I'm using a setup project to install the app on the production machine, but I think I'm missing a dependency somewhere and I can't figure it out. On the production machine the app starts fine, but when I try to sync the subscription I get: System.IO.FileNotFoundException: The specified module could not be found. (Exception from HRESULT: 0x8007007E)

    Read the article

  • Remove redundant entries, scala way

    - by andersbohn
    Edit: Added the fact that the list is sorted and, realizing 'duplicate' is misleading, replaced that word with 'redundant' in the title. I have a sorted list of entries stating a production value in a given interval. Entries stating the exact same value at a later time add no information and can safely be left out.

    case class Entry(minute: Int, production: Double)

    val entries = List(Entry(0, 100.0), Entry(5, 100.0), Entry(10, 100.0), Entry(20, 120.0), Entry(30, 100.0), Entry(180, 0.0))

    Experimenting with the Scala 2.8 collection functions, so far I have this working implementation:

    entries.foldRight(List[Entry]()) { (entry, list) =>
      list match {
        case head :: tail if entry.production == head.production => entry :: tail
        case head :: tail => entry :: list
        case List() => entry :: List()
      }
    }

    res0: List[Entry] = List(Entry(0,100.0), Entry(20,120.0), Entry(30,100.0), Entry(180,0.0))

    Any comments? Am I missing out on some Scala magic?
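
    One shorter alternative in 2.8, as a sketch: pair each entry with its predecessor and keep only the rows where production changes (like the fold, it relies on the list being sorted, and it additionally assumes the list is non-empty):

        val cleaned = entries.head :: entries.sliding(2).collect {
          case List(a, b) if a.production != b.production => b
        }.toList

    On the sample data this produces the same four entries as the foldRight version.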

    Read the article

  • How do I effectively store a connection string in machine.config only?

    - by Scott Bedwell
    We are moving to an environment with multiple instances of MS SQL running on the same server (a test instance and a production instance). We also have separate test and production web servers, and would like our ASP.NET applications to "magically" use the test database instance on the test web server and the production database instance on the production web servers. We would like to store the connection strings in machine.config rather than in web.config, but when we put them in machine.config, Visual Studio's IDE (particularly with datasets) does not recognize that machine.config contains the connection. Does anyone know of a solution for making these machine.config connection strings visible in Visual Studio, or of a different solution that would accommodate this? Thanks.
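
    The mechanism itself is just the ordinary connectionStrings section placed in each machine's machine.config, with the same name pointing at a different instance per box (the name and server values here are invented):

        <!-- machine.config on the TEST web server -->
        <connectionStrings>
          <add name="AppDb"
               connectionString="Data Source=SQLHOST\TEST;Initial Catalog=AppDb;Integrated Security=True"
               providerName="System.Data.SqlClient"/>
        </connectionStrings>

    A common workaround for the designer problem is to keep a matching entry in the dev project's web.config (preceded by a <clear/>) so Visual Studio's dataset designer has something local to resolve, and strip that section out when deploying.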

    Read the article

  • How to disable ActiveRecord cache logging in Rails

    - by user1508459
    I'm trying to disable the logging of cache hits in production. I have succeeded in getting SQL queries to stop being logged, but have had no luck with the cache log entries. An example line in the production log:

    CACHE (0.0ms) SELECT merchants.* FROM merchants WHERE merchants.id = 1 LIMIT 1

    I do not want to disable all logging, since I want logger.debug statements to show up in the production log. I'm using Rails 3.2.1 with MySQL and Apache. Any suggestions?
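
    One way in Rails 3.2 is to filter the SQL log subscriber (a monkey-patch sketch, not a public API; cached queries are the events whose payload name is "CACHE"):

        # config/initializers/quiet_cache_logging.rb
        class ActiveRecord::LogSubscriber
          alias_method :sql_without_cache_squelch, :sql
          def sql(event)
            return if event.payload[:name] == 'CACHE'  # drop only cache hits
            sql_without_cache_squelch(event)
          end
        end

    Only sql.active_record events are touched, so logger.debug output still reaches the production log.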

    Read the article

  • 10 Steps to access Oracle stored procedures from Crystal Reports

    Requirements to access Oracle stored procedures from CR

    The following requirements must be met in order for CR to access an Oracle stored procedure:

    1. You must create a package that defines the REF CURSOR. This REF CURSOR must be strongly bound to a static pre-defined structure (see Strongly Bound REF CURSORs vs Weakly Bound REF CURSORs). This package must be created separately, before the creation of the stored procedure. NOTE: Crystal Reports 9 native connections will support Oracle stored procedures created within packages as well as Oracle stored procedures referencing weakly bound REF CURSORs. Crystal Reports 8.5 native connections will support Oracle stored procedures referencing weakly bound REF CURSORs.
    2. The procedure must have a parameter that is a REF CURSOR type. CR uses this parameter to access and define the result set that the stored procedure returns.
    3. The REF CURSOR parameter must be defined as IN OUT (read/write mode). After the procedure has opened and assigned a query to the REF CURSOR, CR will perform a FETCH call for every row of the query's result. This is why the parameter must be defined as IN OUT.
    4. Parameters can only be input (IN) parameters. CR is not designed to work with OUT parameters.
    5. The REF CURSOR variable must be opened and assigned its query within the procedure.
    6. The stored procedure can only return one record set. The structure of this record set must not change based on parameters.
    7. The stored procedure cannot call another stored procedure.
    8. If using an ODBC driver, it must be the CR Oracle ODBC driver (installed by CR). Other Oracle ODBC drivers (installed by Microsoft or Oracle) may not function correctly.
    9. If you are using the CR ODBC driver, you must ensure that in the ODBC Driver Configuration setup, under the Advanced tab, the option 'Procedure Return Results' is checked ON.
    10. If you are using the native Oracle driver and using hard-coded date selection within the procedure, the date selection must use either a string representation format of 'YYYY-MM-DD' (i.e. WHERE DATEFIELD = '1999-01-01') or the TO_DATE function with the same format specified (i.e. WHERE DATEFIELD = TO_DATE('1999-01-01','YYYY-MM-DD')). For more information, refer to kbase article C2008023.
    11. Most importantly, this stored procedure must execute successfully in Oracle's SQL*Plus utility.

    If all of these conditions are met, you must next ensure you are using the appropriate database driver. Please refer to the sections in this white paper for a list of acceptable database drivers.
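
    A minimal package/procedure pair satisfying requirements 1 through 5 (table and object names are placeholders, not from the paper):

        CREATE OR REPLACE PACKAGE report_pkg AS
          -- strongly typed REF CURSOR, bound to a fixed row structure (req. 1)
          TYPE order_cur IS REF CURSOR RETURN orders%ROWTYPE;
        END report_pkg;
        /
        CREATE OR REPLACE PROCEDURE get_orders (
          p_status IN  VARCHAR2,                -- input parameters only (req. 4)
          rc       IN OUT report_pkg.order_cur  -- IN OUT REF CURSOR (reqs. 2 and 3)
        ) AS
        BEGIN
          OPEN rc FOR                           -- opened and assigned here (req. 5)
            SELECT * FROM orders WHERE status = p_status;
        END get_orders;
        /

    Running both blocks, and then the procedure itself, in SQL*Plus first covers requirement 11 before Crystal Reports ever touches it.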

    Read the article

  • Looking for a USB Network Attached Storage (NAS) adapter that supports multiple drives and NTFS/FAT32 filesystems

    - by braveterry
    I'm looking for a NAS adapter that supports attaching multiple USB devices to the network.

    Here's what I'd like to see in a NAS adapter:

    Under $100.00.
    Support for multiple devices. This can be through a USB hub or through multiple USB connectors on the device itself.
    BitTorrent support would be nice, but this isn't a deal-breaker.
    Filesystem support for at least NTFS or FAT32. I'd prefer not to have to reformat to use the device, but this is also not a deal-breaker.

    Here is what I am NOT looking for:

    I'm NOT looking for a NAS enclosure. I already have a couple of spare external USB drives that I'd like to use.
    I'm NOT looking for a networked USB hub like the one mentioned here. Network USB hubs only allow access to a drive from one PC at a time.
    I'm NOT looking for a wireless router with a NAS built in. I already have a wireless router, and I'd rather not go through the hassle of replacing it if possible.

    What I've looked at so far:

    PogoPlug: This supports multiple devices via a USB hub, but there's no BitTorrent support. It's $99.00, so I may end up going with this and hope that they patch in BitTorrent support later.
    Addonics NAS Adapter: This supports only one device per adapter, so it's a non-starter.
    SimpleNET NAS Head USB 2.0 Portable Dongle: I'm not 100% sure this supports multiple devices. Plus there doesn't seem to be any BitTorrent support.

    I'll try to update this post as I explore other devices.

    Read the article
