Search Results

Search found 117232 results on 4690 pages for 'sql user group'.

  • Rails 3 MySQL 2 reports an error in what looks to be valid SQL syntax

    - by John Judd
    I am trying to use the following bit of code to help in seeding my database. I need to add data continually during development and do not want to completely reseed every time I add something new to the seeds.rb file. So I added the following function to insert the data only if it doesn't already exist:

        def AddSetting(group, name, value, desc)
          Admin::Setting.create({group: group, name: name, value: value, description: desc}) unless Admin::Setting.find_by_sql("SELECT * FROM admin_settings WHERE group = '#{group}' AND name = '#{name}';").exists?
        end

        AddSetting('google', 'analytics_id', '', 'The ID of your Google Analytics account.')
        AddSetting('general', 'page_title', '', '')
        AddSetting('general', 'tag_line', '', '')

    This function is included in the db/seeds.rb file. Is this the right way to do this? However, I am getting the following error when I try to run it through rake:

        rake aborted!
        Mysql2::Error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'group = 'google' AND name = 'analytics_id'' at line 1: SELECT * FROM admin_settings WHERE group = 'google' AND name = 'analytics_id';
        Tasks: TOP => db:seed
        (See full trace by running task with --trace)
        Process finished with exit code 1

    What confuses me is that I am generating correct SQL as far as I can tell. My code generates the SQL and passes it to the model's find_by_sql method; Rails itself can't be changing the SQL, or can it?

        SELECT * FROM admin_settings WHERE group = 'google' AND name = 'analytics_id';

    I've written a lot of SQL over the years and I've looked through similar questions here. Maybe I've missed something, but I cannot see it.
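
    Note that the error message is pointing at the word group itself: GROUP is a reserved word in MySQL, so using it as an unquoted column name in a WHERE clause is a syntax error no matter how well-formed the rest of the statement is. As a minimal sketch (assuming the column really is named group), quoting the identifier with MySQL backticks makes the statement parse:

        SELECT *
        FROM admin_settings
        WHERE `group` = 'google'      -- backticks mark `group` as an identifier,
          AND name = 'analytics_id';  -- not the reserved word GROUP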

    Read the article

  • SQL: Getting the full record with the highest count.

    - by sqlnoob
    I'm trying to write SQL that produces the desired result from the data below.

    data:

        ID Num  Opt1  Opt2  Opt3  Count
        1       A     A     E     1
        1       A     B     J     4
        2       A     A     E     9
        3       B     A     F     1
        3       B     C     K     14
        4       A     A     M     3
        5       B     D     G     5
        6       C     C     E     13
        6       C     C     M     1

    desired result:

        ID Num  Opt1  Opt2  Opt3  Count
        1       A     B     J     4
        2       A     A     E     9
        3       B     C     K     14
        4       A     A     M     3
        5       B     D     G     5
        6       C     C     E     13

    Essentially I want, for each ID Num, the full record with the highest count. I tried doing a group by, but if I group by Opt1, Opt2, Opt3, this doesn't work because it returns the highest count for each (ID Num, Opt1, Opt2, Opt3) combination, which is not what I want. If I only group by ID Num, I can get the max for each ID Num but I lose the information as to which (Opt1, Opt2, Opt3) combination gives this count. I feel like I've done this before, but I don't often work with SQL and I can't remember how. Is there an easy way to do this?
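
    A common pattern for this kind of greatest-per-group problem is to compute the maximum count per ID Num in a derived table and join it back to the original rows to pick up the remaining columns. A sketch, assuming the table is named data and the columns are named as above (identifiers are ANSI double-quoted because "ID Num" contains a space and "Count" is a keyword; use backticks in MySQL or square brackets in SQL Server; ties on the maximum would return more than one row per ID Num):

        SELECT t."ID Num", t.Opt1, t.Opt2, t.Opt3, t."Count"
        FROM data t
        JOIN (
            -- highest count per ID Num
            SELECT "ID Num", MAX("Count") AS MaxCount
            FROM data
            GROUP BY "ID Num"
        ) m
          ON m."ID Num" = t."ID Num"
         AND t."Count" = m.MaxCount;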

    Read the article

  • Database users in the Oracle Utilities Application Framework

    - by Anthony Shorten
    I mentioned the product database users fleetingly in the last blog post and they deserve a better mention. This applies to all versions of the Oracle Utilities Application Framework. The Oracle Utilities Application Framework uses up to three users initially as part of the base operations of the product. The type of database supported (the framework supports Oracle, IBM DB2 and Microsoft SQL Server) dictates the number of users used and their permissions. For publishing brevity I will outline what is available for the Oracle database and, in summary, mention where it differs for the other databases supported. For Oracle database customers we ship three distinct database users:

    Administration User (SPLADM or CISADM by default) - This is the database user that actually owns the schema. This user is not used by the product to do any DML (Data Manipulation Language) SQL other than what is necessary for maintenance of the database. This database user performs all the DCL (Data Control Language) and DDL (Data Definition Language) against the database. It is typically reserved for Database Administration use only.

    Product Read Write User (SPLUSER or CISUSER by default) - This is the database user used by the product itself to execute DML (Data Manipulation Language) statements against the schema owned by the Administration user. This user has the appropriate read and write permissions to objects within the schema owned by the Administration user. For databases such as DB2 and SQL Server we may not create this user but use other DCL (Data Control Language) statements and facilities to simulate this user.

    Product Read User (SPLREAD or CISREAD by default) - This is the database user that has read-only permission to the schema owned by the Administration user. It is used for reporting or any part of the product or interface that requires read permissions to the database (for example, products that have ConfigLab and Archiving use this user for remote access). For databases such as DB2 and SQL Server we may not create this user but use other DCL (Data Control Language) statements and facilities to simulate this user.

    You may notice the words "by default" in the list above. The values supplied with the installer are the defaults and can be changed to whatever the site standard or implementation wants to use (as long as they conform to the standards supported by the underlying database). You can even create multiples of each within the same database, pointing to the same schema. To manage the permissions for the users, there is a utility provided with the installation (oragensec (Oracle), db2gensec (DB2) or msqlgensec (SQL Server)) that generates the security definitions for the above users. It can be executed a number of times for each schema to give users appropriate permissions. For example, it is possible to define more than one Read Write User to access the database. This is a common technique used by implementations to have a different user per access mode (to separate online and batch). In fact you can also allocate additional security (such as resource profiles in Oracle) to limit the impact of specific users at the database. To facilitate users and permissions, in Oracle for example, we create a CISREAD role (read-only role) and a CISUSER role (read-write role) that can be allocated to the appropriate database user. When the security permissions utility, oragensec in this case, is executed it uses the role to determine the permissions.
    To give you a case study, my underpowered laptop has multiple installations of multiple products on it, but I have one database. I create a different schema for each product and each version (with my own naming convention to help me manage the databases). I create individual users on each schema and run oragensec to maintain the permissions for each appropriately. It works fine as long as I have set up the user IDs appropriately. This means:

    Creating the users with the appropriate roles. I use the common CISUSER and CISREAD roles across versions and across Oracle Utilities Application Framework products. Just remember to associate the CISUSER role with the database user you want to use for read/write operations and the CISREAD role with the user you wish to use for read-only operations. The role is treated as a tag that indicates to the oragensec utility which permissions to assign to the user. The utilities for the other database types essentially do the same, obviously using the technology available within those databases.

    Running oragensec for the read-write user and read-only user against the appropriate administration user (I will abbreviate it to ADM user). This ensures the right permissions are allocated to the right users for the right products. To help me there, I use the same prefix on the user name for the same product. For example, my Oracle Utilities Application Framework V4 environment has the administration user set to FW4ADM and the associated FW4USER and FW4READ as the users for the product to use. For my MWM environment I used MWMADM for the administration user and MWMUSER and MWMREAD for the associated users. You get the picture. When I run oragensec (once for each ADM user), I know which other users to associate with it.

    Remembering to rerun oragensec against the users if I run upgrades, service packs or database-based single fixes. This ensures that the users stay in synchronization with the ADM user.

    As a side note, for those who do not understand the difference between DML, DCL and DDL:

    DDL (Data Definition Language) - SQL statements that define the database schema and the structures within it. Statements such as CREATE and DROP are examples of DDL.

    DCL (Data Control Language) - SQL statements that define the database-level permissions on DDL-maintained objects within the database. Statements such as GRANT and REVOKE are examples of DCL.

    DML (Data Manipulation Language) - SQL statements that alter the data within the tables. Statements such as SELECT, INSERT, UPDATE and DELETE are examples of DML.

    Hope this has clarified the database user support. Remember that in Oracle Utilities Application Framework V4 we enhanced this by also supporting CLIENT_IDENTIFIER, which allows the database to still use the administration user for the main processing but makes the database session more traceable.
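
    As an illustration only (the real definitions are generated by the oragensec utility, and the user names below reuse the hypothetical FW4 prefix from the case study), the role-as-a-tag pattern described above looks roughly like this in plain Oracle SQL:

        -- Read-write and read-only roles (CISUSER and CISREAD are the defaults)
        CREATE ROLE CISUSER;
        CREATE ROLE CISREAD;

        -- Product read-write user, tagged with the read-write role
        CREATE USER FW4USER IDENTIFIED BY some_password;
        GRANT CREATE SESSION TO FW4USER;
        GRANT CISUSER TO FW4USER;

        -- Product read-only user, tagged with the read-only role
        CREATE USER FW4READ IDENTIFIED BY some_password;
        GRANT CREATE SESSION TO FW4READ;
        GRANT CISREAD TO FW4READ;

    The object-level grants against the ADM schema are then what oragensec works out from these role tags.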

    Read the article

  • Aggregate SharePoint Event/Items with Exchange appointments into your Calendar view using Calendar Overlay

    - by eJugnoo
    In continuation of my previous post about using Calendar Overlay in SharePoint 2010 when you have another Calendar view in any other SharePoint list: the other overlay option we have is Exchange. You can overlay the current (logged-in) user's personal calendar from Exchange onto an existing SharePoint calendar, in any list, by using the new Overlay feature. Here is an example: Yes, you have to point to your OWA and Exchange web service URLs. It can also go and find your web service URL when you click Find. In my case, it converted the machine name into an FQDN. That was smart... I had an initial configuration issue: my test user (Administrator!) didn't have a corresponding Exchange e-mail in the SharePoint profile. So you have to ensure that your profiles are in sync with AD/Exchange for e-mail. It picks up the current user's e-mail from the profile to pull data from the Exchange calendar. My calendar in OWA... The same calendar in Outlook 2010... I think the new Calendar Overlay feature fills a great void. Users can now view SharePoint information within the context of their personal calendar. Which is simply great! Enjoy new SharePoint 2010. --Sharad

    Read the article

  • Running SSIS packages from C#

    - by Piotr Rodak
    Most developers and DBAs know about two ways of deploying packages: you can deploy them to the database server and run them using a SQL Server Agent job, or you can deploy the packages to the file system and run them using the dtexec.exe utility. Both approaches have their pros and cons. However, I would like to show you that there is a third way (sort of) that is often overlooked, and it can give you capabilities the 'traditional' approaches can't. I have been working for a few years with applications that run packages from host applications implemented in .NET. As you know, SSIS provides a programming model that you can use to implement more flexible solutions. SSIS applications are usually thought to be batch oriented, with a fairly rigid architecture and processing model, with fixed timeframes when the packages are executed to process data. It doesn't have to be the case; you don't have to limit yourself to a batch-oriented architecture. I have very good experiences with service-oriented architectures processing large amounts of data. These applications are more complex than what I would like to show here, but the principle stays the same: you can execute packages as a service, on an ad-hoc basis. You can also implement and schedule various signals - HTTP calls, file drops, time schedules, Tibco messages and others - to run the packages. You can implement an event handler that will trigger execution of SSIS when a certain event occurs in a StreamInsight stream. This post is just a small example of how you can use the API and other features to create a service that can run SSIS packages on demand. I thought it might be a good idea to implement a RESTful service that would listen to requests and execute appropriate actions. As it turns out, it is trivial in C#. The application is implemented as a console application for ease of debugging and running. In reality, you might want to implement the application as a Windows service. To begin, you have to reference the namespace System.ServiceModel.Web and then add a few lines of code:

        Uri baseAddress = new Uri("http://localhost:8011/");

        WebServiceHost svcHost = new WebServiceHost(typeof(PackRunner), baseAddress);

        try
        {
            svcHost.Open();

            Console.WriteLine("Service is running");
            Console.WriteLine("Press enter to stop the service.");
            Console.ReadLine();

            svcHost.Close();
        }
        catch (CommunicationException cex)
        {
            Console.WriteLine("An exception occurred: {0}", cex.Message);
            svcHost.Abort();
        }

    The interesting lines are 3, 7 and 13. In line 3 you create a WebServiceHost object. In line 7 you start listening on the defined URL and then in line 13 you shut down the service. As you have noticed, the WebServiceHost constructor accepts the type of an object (here: PackRunner) that will be instantiated as a singleton and subsequently used to process the requests. This is the class where you put your logic, but to tell WebServiceHost how to use it, the class must implement an interface which declares the methods to be used by the host. The interface itself must be ornamented with the attribute ServiceContract.
        [ServiceContract]
        public interface IPackRunner
        {
            [OperationContract]
            [WebGet(UriTemplate = "runpack?package={name}")]
            string RunPackage1(string name);

            [OperationContract]
            [WebGet(UriTemplate = "runpackwithparams?package={name}&rows={rows}")]
            string RunPackage2(string name, int rows);
        }

    Each method that is going to be used by WebServiceHost has to have the attribute OperationContract, as well as a WebGet or WebInvoke attribute. The detailed discussion of the available options is outside the scope of this post. I also recommend using more descriptive names for methods. Then, you have to provide the implementation of the interface:

        public class PackRunner : IPackRunner
        {
            ...

    There are two methods defined in this class. Since the full code is attached to the post, I will show only the more interesting method, RunPackage2.

        /// <summary>
        /// Runs package and sets some of its variables.
        /// </summary>
        /// <param name="name">Name of the package</param>
        /// <param name="rows">Number of rows to export</param>
        /// <returns></returns>
        public string RunPackage2(string name, int rows)
        {
            try
            {
                string pkgLocation = ConfigurationManager.AppSettings["PackagePath"];

                pkgLocation = Path.Combine(pkgLocation, name.Replace("\"", ""));

                Console.WriteLine();
                Console.WriteLine("Calling package {0} with parameter {1}.", name, rows);

                Application app = new Application();
                Package pkg = app.LoadPackage(pkgLocation, null);

                pkg.Variables["User::ExportRows"].Value = rows;
                DTSExecResult pkgResults = pkg.Execute();
                Console.WriteLine();
                Console.WriteLine(pkgResults.ToString());
                if (pkgResults == DTSExecResult.Failure)
                {
                    Console.WriteLine();
                    Console.WriteLine("Errors occured during execution of the package:");
                    foreach (DtsError er in pkg.Errors)
                        Console.WriteLine("{0}: {1}", er.ErrorCode, er.Description);
                    Console.WriteLine();
                    return "Errors occured during execution. Contact your support.";
                }

                Console.WriteLine();
                Console.WriteLine();
                return "OK";
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex);
                return ex.ToString();
            }
        }

    The method accepts the package name and the number of rows to export. The packages are deployed to the file system. The path to the packages is configured in the application configuration file. This way, you can implement multiple services on the same machine, provided you also configure the URL for each instance appropriately. To run a package, you have to reference the Microsoft.SqlServer.Dts.Runtime namespace. This namespace is implemented in Microsoft.SQLServer.ManagedDTS.dll, which in my case was installed in the folder "C:\Program Files (x86)\Microsoft SQL Server\100\SDK\Assemblies". Once you have done that, you can create an instance of Microsoft.SqlServer.Dts.Runtime.Application as in line 18 in the above snippet. It may be a good idea to create the Application object in the constructor of the PackRunner class, to avoid the necessity of recreating it each time the service is invoked. Then, in line 19 you see that an instance of Microsoft.SqlServer.Dts.Runtime.Package is created. The method LoadPackage in its simplest form just takes the package file name as the first parameter. Before you run the package, you can set its variables to certain values.
    This is a great way of configuring your packages without all the hassle of dtsConfig files. In the above code sample, the variable "User::ExportRows" is set to the value of the parameter "rows" of the method. Eventually, you execute the package. The method doesn't throw exceptions; you have to test the result of execution yourself. If the execution wasn't successful, you can examine the collection of errors exposed by the package. These are the familiar errors you often see during development and debugging of the package. If you run the package from code, you have the opportunity to persist them or log them using your favourite logging framework. The package itself is very simple; it connects to my AdventureWorks database and saves the number of rows specified in the variable "User::ExportRows" to a file. You should know that before you run the package, you can change its connection strings, logging, events and much more. I attach a solution with the test service, as well as a project with two test packages. To test the service, you have to run it and wait for the message saying that the host is started. Then, just type (or copy and paste) the below command into your browser:

        http://localhost:8011/runpackwithparams?package=%22ExportEmployees.dtsx%22&rows=12

    When everything works fine, and you have modified the package to point to your AdventureWorks database, you should see "OK" wrapped in XML. I stopped the database service to simulate an invalid connection string situation: the output of the request is different now, and the service console window shows more information. As you see, implementing a service-oriented ETL framework is not a very difficult task. You have the ability to configure the packages before you run them, and you can implement logging that is consistent with the rest of your system. In an application I have worked with, we also have resource monitoring and execution control. We don't allow more than a certain number of packages to run simultaneously. This ensures we don't strain the server and that we use memory and CPUs efficiently. The attached zip file contains two projects. One is the package runner; it has to be executed with administrative privileges as it registers an HTTP namespace. The other project contains two simple packages. This is really a cool thing, you should check it out!

    Read the article

  • /planes and /clubs or /wiki/planes and /wiki/clubs

    - by Jelmer
    I am currently working on a nice application about which I can't share all the details, but it will have some sort of a wiki part. In this wiki, you will be able to change the planes as well as the clubs; maybe in the future it will be possible to change the countries and manufacturers as well, but I have to think about this and check how well it works. You will understand that it has to be extendable - that is really important. The options are:

    1. Use the planes controller with an edit page, and the same for the clubs.
    2. Route the planes and clubs controllers to the wiki controller, so we have one nice "path" to edit this stuff.

    I want to have it called wiki, that is for sure, because that is what it is. But I am storing the planes and clubs data in their own tables in my database; I think that is kind of obvious, since it has to be maintainable. Right now you could edit a plane via the URL: example.com/wiki/planes/edit/Duo_Discus.html Do you think that is better than example.com/planes/edit/Duo_Discus.html, since it makes it easy for the user to understand that he is working in the wiki instead of in the planes? Or do you think this will break the user experience?

    Read the article

  • Building dynamic OLAP data marts on-the-fly

    - by DrJohn
    At the forthcoming SQLBits conference, I will be presenting a session on how to dynamically build an OLAP data mart on-the-fly. This blog entry is intended to clarify exactly what I mean by an OLAP data mart, why you may need to build them on-the-fly, and finally to outline the steps needed to build them dynamically. In subsequent blog entries, I will present exactly how to implement some of the techniques involved.

    What is an OLAP data mart? In data warehousing parlance, a data mart is a subset of the overall corporate data provided to business users to meet specific business needs. Of course, the term does not specify the technology involved, so I coined the term "OLAP data mart" to identify a subset of data which is delivered in the form of an OLAP cube which may be accompanied by the relational database upon which it was built. To clarify, the relational database is specifically created and loaded with the subset of data, and then the OLAP cube is built and processed to make the data available to the end-users via standard OLAP client tools.

    Why build OLAP data marts? Market research companies sell data to their clients to make money. To gain competitive advantage, market research providers like to "add value" to their data by providing systems that enhance analytics, thereby allowing clients to make best use of the data. As such, OLAP cubes have become a standard way of delivering added value to clients. They can be built on-the-fly to hold specific data sets and meet particular needs and then hosted on a secure intranet site for remote access, or shipped to clients' own infrastructure for hosting. Even better, they support a wide range of different tools for analytical purposes, including the ever popular Microsoft Excel.

    Extension Attributes: The Challenge. One of the key challenges in building multiple OLAP data marts based on the same 'template' is handling extension attributes. These are attributes that meet the client's specific reporting needs but do not form part of the standard template. Now clearly, these extension attributes have to come into the system via additional files and ultimately be added to relational tables so they can end up in the OLAP cube. However, processing these files and filling dynamically altered tables with SSIS is a challenge, as SSIS packages tend to break as soon as the database schema changes. There are two approaches to this: (1) dynamically build an SSIS package in memory to match the new database schema using C#, or (2) have the extension attributes provided as name/value pairs so the file's schema does not change and can easily be loaded using SSIS. The problem with the first approach is the complexity of writing an awful lot of complex C# code. The problem with the second approach is that name/value pairs are useless to an OLAP cube, so they have to be pivoted back into a proper relational table somewhere in the data load process WITHOUT breaking SSIS. How this can be done will be part of a future blog entry.

    What is involved in building an OLAP data mart? There are a great many steps involved in building OLAP data marts on-the-fly. The key point is that all the steps must be automated to allow for the production of multiple OLAP data marts per day (i.e. many thousands, each with its own specific data set and attributes). Now most of these steps have a great deal in common with standard data warehouse practices. The key difference is that the databases are all built to order.
    The only permanent database is the metadata database (shown in orange in the diagram), which holds all the metadata needed to build everything else (i.e. client orders, configuration information, connection strings, client-specific requirements and attributes etc.). The staging database (shown in red) has a short life: it is built, populated and then ripped down as soon as the OLAP data mart has been populated. The OLAP data mart itself comprises the two blue components: the Data Mart, which is a relational database, and the OLAP Cube, which is an OLAP database implemented using Microsoft Analysis Services (SSAS). The client may receive just the OLAP cube or both components together, depending on their reporting requirements. So, in broad terms, the steps required to fulfil a client order are as follows (a sketch of the dynamic SQL used in Step 2 follows this list):

    Step 1: Prepare metadata. Create a set of database names unique to the client's order. Modify all package connection strings to be used by SSIS to point to the new databases and file locations.

    Step 2: Create relational databases. Create the staging and data mart relational databases using dynamic SQL and set the database recovery mode to SIMPLE, as we do not need the overhead of logging anything. Execute SQL scripts to build all database objects (tables, views, functions and stored procedures) in the two databases.

    Step 3: Load staging database. Use SSIS to load all data files into the staging database in a parallel operation. Load extension files containing name/value pairs; these will provide client-specific attributes in the OLAP cube.

    Step 4: Load data mart relational database. Load the data from staging into the data mart relational database, again in parallel where possible. Allocate surrogate keys and use SSIS to perform surrogate key lookup during the load of fact tables.

    Step 5: Load extension tables & attributes. Pivot the extension attributes from their native name/value pairs into proper relational tables. Add the extension attributes to the views used by the OLAP cube.

    Step 6: Deploy & process OLAP cube. Deploy the OLAP database directly to the server using a C# script task in SSIS. Modify the connection string used by the OLAP cube to point to the data mart relational database. Modify the cube structure to add the extension attributes to both the data source view and the relevant dimensions. Remove any standard attributes that are not required. Process the OLAP cube.

    Step 7: Backup and drop databases. Drop the staging database as it is no longer required. Back up the data mart relational and OLAP databases and ship these to the client's infrastructure. Drop the data mart relational and OLAP databases from the build server. Mark the order complete and start processing the next order, ad infinitum.

    So my future blog posts and my forthcoming session at the SQLBits conference will all focus on some of the more interesting aspects of building OLAP data marts on-the-fly, such as handling the load of extension attributes and how to dynamically alter the structure of an OLAP cube using C#.
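
    As a sketch of Step 2 (the naming convention here is hypothetical; the real database names would come from the metadata database), creating a build database with dynamic SQL and switching it to SIMPLE recovery looks like this in T-SQL:

        DECLARE @OrderId nvarchar(20) = N'12345';      -- hypothetical client order id
        DECLARE @db sysname = N'Staging_' + @OrderId;  -- name unique to this order
        DECLARE @sql nvarchar(max);

        SET @sql = N'CREATE DATABASE ' + QUOTENAME(@db) + N';';
        EXEC (@sql);

        -- A short-lived build database does not need the overhead of full logging
        SET @sql = N'ALTER DATABASE ' + QUOTENAME(@db) + N' SET RECOVERY SIMPLE;';
        EXEC (@sql);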

    Read the article

  • SQL University: What and why of database refactoring

    - by Mladen Prajdic
    This is a post for a great idea called SQL University, started by Jorge Segarra, also famously known as SqlChicken on Twitter. It's a collection of blog posts on different database-related topics contributed by several smart people all over the world. So this week is mine, and we'll be talking about database testing and refactoring. In 3 posts we'll cover: SQLU part 1 - What and why of database testing; SQLU part 2 - What and why of database refactoring; SQLU part 3 - Tools of the trade. This is the second part of the series, and in it we'll take a look at what database refactoring is and why to do it.

    Why refactor a database? To know why to refactor, we first have to know what refactoring actually is. Code refactoring is a process where we change module internals in a way that does not change that module's input/output behavior. For successful refactoring there is one crucial thing we absolutely must have: tests. Automated unit tests are the only guarantee we have that we haven't broken the input/output behavior before refactoring. If you haven't, go back and read my post on the matter. Then start writing them. The next thing you need is a code module. Those are views, UDFs and stored procedures. With direct table access we can kiss fast and sweet refactoring goodbye. One more point for having a database abstraction layer. And no, ORMs don't fall into that category. But also know that refactoring is NOT adding new functionality to your code. Many have fallen into this trap. Don't be one of them and resist the lure of the dark side. And it's a strong lure. We developers in general love to add new stuff to our code, but hate fixing our own mistakes or changing existing code for no apparent reason. To be a good refactorer one needs discipline and focus. Now we know that refactoring is all about changing the inner workings of existing code. This can be due to performance optimizations, changing internal code workflows or some other reason. This is a typical black-box scenario to the outside world. If we upgrade the car engine it still has to drive on the road (preferably faster) and not fly (no matter how cool that would be). Also be aware that white-box tests will break when we refactor.

    What to refactor in a database? Refactoring databases doesn't happen that often, but when it does it can include a lot of stuff. Let us look at a few common cases.

    Adding or removing database schema objects. Adding, removing or changing table columns in any way, adding constraints, keys, etc... All of these can be counted as internal changes not visible to the data consumer. But each of these carries a potential input/output behavior change. Dropping a column can result in views not working anymore or stored procedure logic crashing. Adding a unique constraint shows duplicated data that shouldn't exist. Foreign keys break a truncate table command executed from an application that runs once a month. All these scenarios are very real and can happen. With a proper database abstraction layer fully covered with black-box tests we can make sure something like that does not happen (hopefully at all).

    Changing physical structures. Physical structures include heaps, indexes and partitions. We can pretty much add or remove those without changing the data returned by the database. But the performance can be affected. So here we use our performance tests. We do have them, right? Just by adding a single index we can achieve orders of magnitude performance improvement. Won't that make users happy?
    But what if that index causes our write operations to crawl to a stop? Again, we have to test this. There are a lot of things to think about and have tests for. Without tests we can't do successful refactoring!

    Fixing bad code. We all have some bad code in our systems. We usually refer to such code as code smells, as it violates good coding practices. Examples of such code smells are SQL injection, use of SELECT *, scalar UDFs or cursors, etc... Each of those is a huge code smell and can result in major code changes. Take SELECT * for example. If we remove a column from a table, the client using that SELECT * statement won't have a clue about that until it runs. Then it will gracefully crash and burn. Not to mention the widely unknown SELECT * view refresh problem that Thomas LaRock (@SQLRockstar on Twitter) and Colin Stasiuk (@BenchmarkIT on Twitter) talk about in detail. Go read about it, it's informative. Refactoring this includes replacing the * with column names and most likely changes to the application using the database.

    Breaking apart huge stored procedures. Have you ever seen a stored procedure that was 2000 lines long? I have. It's not pretty. It hurts the eyes and sucks away the will to live for the next 10 minutes. They are a maintenance nightmare and turn into things no one dares to touch. I'm willing to bet that 100% of the time they don't have a single test on them. Large stored procedures (and functions) are a clear sign that they contain business logic. General opinion on good database coding practices says that business logic has no business in the database; that's the application's part. Refactoring such behemoths requires writing lots of edge-case tests for the stored procedure's input/output behavior and then starting to refactor it. First we split the logic inside into smaller parts like new stored procedures and UDFs. Those then get called from the master stored procedure. Once we've successfully modularized the database code, it's best to transfer that logic into the applications consuming it. This only leaves the stored procedure with common data manipulation logic. Of course this isn't always possible, so having a plethora of performance and behavior unit tests is absolutely necessary to confirm we've actually improved the codebase in some way.

    Refactoring is not a popular chore amongst developers or managers. The former don't like fixing old code, the latter can't see the financial benefit. Remember how we talked about being lousy at estimating future costs in the previous post? But there comes a time when it must be done. Hopefully I've given you some ideas on how to get started. In the last post of the series we'll take a look at the tools to use and an example of testing and refactoring.
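
    For the curious, the SELECT * view refresh problem mentioned above is easy to reproduce. A small sketch with hypothetical table and view names (SQL Server syntax; sp_refreshview rebuilds the view's cached metadata):

        CREATE TABLE dbo.Orders (Id int, Total money);
        GO
        CREATE VIEW dbo.vOrders AS SELECT * FROM dbo.Orders;
        GO
        -- The view's column list was fixed when the view was created...
        ALTER TABLE dbo.Orders ADD Status varchar(10);
        GO
        SELECT * FROM dbo.vOrders;           -- still no Status column
        EXEC sp_refreshview N'dbo.vOrders';  -- refresh the cached definition
        SELECT * FROM dbo.vOrders;           -- Status now appears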

    Read the article

  • ERP/CRM Systems: Desktop Based? Web Based?

    - by Parhs
    Hello guys. I have seen 2-3 ERPs in action and I am wondering what is better: a desktop-based application, or a web-based one displayed in a browser. My first experience was with a web-based ERP when I was 14 years old. It was web-based and terribly slow. For the most simple task you had to do lots of clicks, there was no keyboard support, and pages took ages to load. Last year I worked on migrating an old terminal-based COBOL application to a newer computer. The computer that worked till today, and still has no problem, was from 1993. The user interface of course was text-based. The speed at which those guys placed orders was amazing: just typing the name of the customer, then 5-10 keys to add a product to the order. Compared to this, the ERP page for placing orders (link: click sales orders) seems terribly slow for adding a product. No keyboard shortcut works to save what you added, and generally I believe you need 4 times more time to place an order compared to the text interface. Having to use both mouse and keyboard for this task is bad and sadistic. So how the heck can these people ever use a system like that? So in the long run a desktop application seems the only way. Of course browsers support shortcuts, but the way to override the defaults that browsers use isn't cross-compatible. That is a huge problem. Finally, if we are forced to use the cloud in the near future, what about keyboard shortcuts? I feel confused. I have seen converters of desktop applications to browser applications, but they are slow as hell. The question is: what about user friendliness? What kind of application would you use?

    Read the article

  • Fujitsu Raku-Raku SmartPhone: Japanese Digital Seniors UX Insight from @debralilley

    - by ultan o'broin
    Super blog post on the super-important subject of digital inclusion by Oracle partner Fujitsu appstech maven and Oracle Applications User Experience FXA-er and ACE Director Debra Lilley (@debralilley). Debra tells us how Fujitsu is enabling digital inclusion for older mobile users in Japan with their Raku-Raku smartphone: Fujitsu Raku-Raku - My UX Homework (Raku-Raku means easy or comfortable in Japanese). There are UX, mobile, social media, and methodology takeaways in Debra's blog. Fujitsu Raku-Raku Smartphone Demo. I encourage you to read Debra's blog. In it, she makes reference to a tailored social media experience for those digital seniors, as they'd be called in Japan (the UK and Ireland use the term silver surfers). You can find that online experience here: Online Community site for Fujitsu Raku-Raku Smartphone Digital Seniors (English translation via Google Translate). It's an important reminder that UX is global, sure, but also that worldwide accessibility and digital inclusion are priorities for UX too. It's vital that we understand such aspects of technology adoption and how the requirements of different categories of technology users can be met. Oracle is committed to providing the best possible user experience for enterprise users of all ages and abilities. That means talking with all sorts of people worldwide and understanding how and why they want to use our technology and what their context of use is. You can read more about Oracle's accessibility program on our corporate website. Proud to say I prompted a few questions in Japan all the way from Ireland. So, UX is not only global, but you can drive UX research globally too without ever leaving home. Brilliant job, Debra. Here's to more such joint research creativity and UX collaborations worldwide between us. Wondering where we might go next? And what a fun way to do things too!

    Read the article

  • Correct architecture for running and stopping complex tasks in the background

    - by Phonon
    I'm having trouble working out the correct architecture for the following task. I have a GUI in Windows Forms that contains a ListBox, listing certain architectural layouts. Once an item in this list is selected, a custom Control displays an interactive visualization of the selected layout. Drawing this interactive diagram is a CPU-intensive task and can take up to a second on my machine. The kind of functionality I'm trying to achieve is that if a user wants to quickly scroll through the layouts in the ListBox (say, holding down the down arrow key), I don't want my computer to sit there thinking about how to draw the layout before it allows the user to do anything else. The obvious answer is, of course, to run the layout calculations in a separate thread. But how do I make that thread return a whole control? How do I make sure I'm not running two layout calculations at once? I'm fairly new to this complex GUI business. So the real question is: what is the right architecture to implement something like this? This seems like something people do all the time, but finding any suggestions on how to do it properly is really difficult.

    Read the article

  • Should Android and iPhone UI be different?

    - by Phonon
    I'm not completely new to developing apps, but I'm at a point where I'm trying to develop something and deploy it on several mobile platforms. To concentrate on only the two major ones, suppose I'm developing an app for Android and iPhone and designing the UI and the general user-interaction architecture. Both platforms give guidelines as to how their UIs should work. For example, most iPhone apps have the Navigation Bar (the one that says "Testing 1" and has a Back button) and an Icon Bar for navigating a program, while Android uses an Options Menu fetched via a Menu button, and "back" navigation is handled with the physical Back button on the device. I've seen many apps that try to force the same UI on every platform, for example custom-building an iPhone-style Icon Bar and putting it in their Android apps, but it just doesn't quite look right to me, and it feels like it violates UI design guidelines somewhat. Are there any good design patterns for implementing something sufficiently similar on both platforms, yet still platform-specific enough that the user would not feel out of their comfort zone? What do people usually do in these situations?

    Read the article

  • Get a user's current IP address from Skype

    - by Jonathan.
    This is assuming that you (/police/ISP) can get the [rough] location of a laptop based on its IP address. If your laptop is stolen, the thief unwittingly connects it to the Internet, and you have Skype on the laptop, could you get the public IP address of the laptop and then go to the police and get it tracked? Or activate the webcam to see the surroundings, but without having Skype ring/notify the user?

    Read the article

  • Opera user script to fill out some form fields

    - by STATUS_ACCESS_DENIED
    I'm looking for a user script that lets me fill out some form fields that are not covered by the Magic Wand in Opera. Alternatively, I could accept a solution that lets Opera fill other form fields with the Wand. To give you one example: when logging into any of the StackExchange sites, I need to manually enter (or enter from a note) the URL of the OpenID provider. I would like to automate this in particular, plus several other sites where a similar situation exists.

    Read the article

  • Effect of HOME on libreoffice when converting to PDF as a non-root user

    - by user1032531
    I installed libreoffice-headless and can convert documents when logged on as root. I then tried doing so as another user; it didn't show an error, but it didn't convert the file. I then found that if I get rid of HOME=/tmp/ayb, it works with the other user. Doesn't HOME=/tmp/ayb just make files default to this directory if not otherwise specified? (Sorry, I tried to search "Linux HOME", but as you probably expect, received a bunch of non-relevant results.) If not, what is the purpose of specifying HOME? Why does setting HOME prevent the conversion for non-root users? Note that /tmp and /tmp/ayb are both 0777. Thank you.

        [root@desktop ~]# yum install libreoffice-headless
        [root@desktop ~]# yum install libreoffice-writer
        [root@desktop ~]# ls -l
        total 48
        -rwxrwxrwx. 1 NotionCommotion NotionCommotion 48128 Jul 30 02:38 document_34.doc
        [root@desktop ~]# HOME=/tmp/ayb; /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc
        convert /tmp/ayb/document_34.doc -> /tmp/ayb/document_34.pdf using writer_pdf_Export
        [root@desktop ~]# rm d*.pdf
        rm: remove regular file `document_34.pdf'? y
        [root@desktop ~]# /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc
        convert /tmp/ayb/document_34.doc -> /tmp/ayb/document_34.pdf using writer_pdf_Export
        [root@desktop ~]# rm d*.pdf
        rm: remove regular file `document_34.pdf'? y
        [root@desktop ~]# su NotionCommotion
        sh-4.1$ HOME=/tmp/ayb; /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc
        sh-4.1$ rm d*.pdf
        rm: cannot remove `d*.pdf': No such file or directory
        sh-4.1$ /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc
        sh-4.1$ rm d*.pdf
        rm: cannot remove `d*.pdf': No such file or directory
        sh-4.1$ exit
        exit
        [root@desktop ~]# su NotionCommotion
        sh-4.1$ /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc
        convert /tmp/ayb/document_34.doc -> /tmp/ayb/document_34.pdf using writer_pdf_Export
        sh-4.1$ rm d*.pdf
        sh-4.1$ HOME=/tmp/ayb; /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc
        sh-4.1$ rm d*.pdf
        rm: cannot remove `d*.pdf': No such file or directory
        sh-4.1$ /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc
        sh-4.1$ rm d*.pdf
        rm: cannot remove `d*.pdf': No such file or directory
        sh-4.1$

    Read the article

  • Gnome-mount alternative in Ubuntu 10.04 or how to mount partition with normal user rights

    - by easyrider
    Hi, I was using gnome-mount to automount drives, but in Lucid it was removed. So are there any alternatives in Lucid, other than editing fstab and programs that do so? Gnome-mount is a program which mounts disks using the same facilities as when mounting a disk as a normal user through Nautilus. There is no need to set up mount points or filesystems. This is particularly interesting if you want to use the automatically created mount points instead of manually specifying them for each disk.

    Read the article

  • Backup the Windows user folder in the cloud?

    - by Benjamin
    As I understand it, Google Drive and Dropbox, the two cloud storage providers I happen to know, can only sync a predefined folder that is created upon installation. I'd be happy to have an automated synchronisation of my folders in the cloud, but I'm not ready to change my habits, and start saving all my documents in the folder imposed by the provider. Is it possible with one of these, or any other you might know, to sync the full Windows user folder instead?

    Read the article

  • WSUS 3.0 SP2 installation fails at "configuring database" step.

    - by flashkube
    Attempting to install WSUS 3.0 SP2 on a Windows Server 2003 Enterprise system. I'm asking the setup to create a new database on one of our existing SQL Server 2005 systems. When the setup gets to the "configuring database" step it stops and throws: "There is a problem with this Windows Installer package. A program run as part of the setup did not finish as expected. Contact your support personnel or package vendor." The two logs it suggests I look at are below. I'm not seeing any errors that mean anything to me. Any direction you can give will be greatly appreciated.

    WSUSSetup.log:

        2009-12-04 15:26:21 Success MWUSSetup Validating pre-requisites...
        2009-12-04 15:26:22 Error MWUSSetup Failed to determine if an higher version of WSUS is installed. Assuming it is not... (Error 0x80070002: The system cannot find the file specified.)
        2009-12-04 15:26:28 Success MWUSSetup No SQL instances found
        2009-12-04 15:26:42 Success MWUSSetup Initializing installation details
        2009-12-04 15:26:42 Success MWUSSetup Installing ASP.Net
        2009-12-04 15:27:24 Success MWUSSetup ASP.Net is installed successfully
        2009-12-04 15:27:24 Success MWUSSetup Installing WSUS...
        2009-12-04 15:27:28 Success CustomActions.Dll Unable to get INSTALL_LANGUAGE property, calculating it...
        2009-12-04 15:27:28 Success CustomActions.Dll Successfully set propery of WSUS admin groups' full names
        2009-12-04 15:27:29 Success CustomActions.Dll .Net framework path: C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727
        2009-12-04 15:27:33 Success CustomActions.Dll Creating user group: WSUS Reporters with Description: WSUS Administrators who can only run reports on the Windows Server Update Services server.
        2009-12-04 15:27:33 Success CustomActions.Dll Creating WSUS Reporters user group
        2009-12-04 15:27:33 Success CustomActions.Dll WSUS Reporters user group already exists
        2009-12-04 15:27:33 Success CustomActions.Dll Successfully created WSUS Reporters user group
        2009-12-04 15:27:33 Success CustomActions.Dll Creating user group: WSUS Administrators with Description: WSUS Administrators can administer the Windows Server Update Services server.
        2009-12-04 15:27:33 Success CustomActions.Dll Creating WSUS Administrators user group
        2009-12-04 15:27:33 Success CustomActions.Dll WSUS Administrators user group already exists
        2009-12-04 15:27:33 Success CustomActions.Dll Successfully created WSUS Administrators user group
        2009-12-04 15:27:33 Success CustomActions.Dll Successfully created WSUS user groups
        2009-12-04 15:27:33 Success CustomActions.Dll Succesfully set binary SID property
        2009-12-04 15:27:33 Success CustomActions.Dll Succesfully set binary SID property
        2009-12-04 15:27:33 Success CustomActions.Dll Successfully set binary SID properties
        2009-12-04 15:28:50 Error MWUSSetup InstallWsus: MWUS Installation Failed (Error 0x80070643: Fatal error during installation.)
        2009-12-04 15:28:50 Error MWUSSetup CInstallDriver::PerformSetup: WSUS installation failed (Error 0x80070643: Fatal error during installation.)
        2009-12-04 15:28:50 Error MWUSSetup CSetupDriver::LaunchSetup: Setup failed (Error 0x80070643: Fatal error during installation.)

    From the end of WSUSSetupmsi_091204_1527.log:

        MSI (s) (58:7C) [15:28:49:860]: Note: 1: 1708
        MSI (s) (58:7C) [15:28:49:860]: Product: Windows Server Update Services 3.0 SP2 -- Installation failed.
        MSI (s) (58:7C) [15:28:49:875]: Cleaning up uninstalled install packages, if any exist
        MSI (s) (58:7C) [15:28:49:875]: MainEngineThread is returning 1603
        MSI (s) (58:78) [15:28:49:985]: Destroying RemoteAPI object.
        MSI (s) (58:90) [15:28:49:985]: Custom Action Manager thread ending.
        === Logging stopped: 12/4/2009 15:28:49 ===
        MSI (c) (30:54) [15:28:50:016]: Decrementing counter to disable shutdown. If counter = 0, shutdown will be denied. Counter after decrement: -1
        MSI (c) (30:54) [15:28:50:016]: MainEngineThread is returning 1603
        === Verbose logging stopped: 12/4/2009 15:28:50 ===

    Read the article
