Search Results

Search found 45150 results on 1806 pages for 'oracle technology database'.


  • Question regarding ideal database implementation for iPhone app

    - by Jeff
    So I have a question about the ideal setup for an app I am getting ready to build. The app is basically going to be a memorization tool, and I already have an SQLite database full of content that I will be using for it. The user will navigate through the contents of the database (using a UIPickerView) and select something for memorization. If that row or cell of data is selected, it is put into a pool - a UITableView dedicated to showing which items you have in your "need to memorize" pool. When you go to that table view, you can select a row and the actual data is populated. All information in the table view is deletable, in the event that the user doesn't want it there anymore. That's it. I know that with database interfacing there are a few different options out there; in this particular setup, is Core Data the easiest approach? Is there any other way that would be better? I am just kind of looking for a point in the right direction; any help is greatly appreciated!


  • Database abstraction/adapters for Ruby

    - by Stiivi
    What are the database abstractions/adapters you are using in Ruby? I am mainly interested in data-oriented features, not in object mapping (like ActiveRecord or DataMapper). I am currently using Sequel. Are there any other options? I am mostly interested in:

    - a simple, clean and unambiguous API
    - data selection (obviously), filtering and aggregation
    - raw value selection without field mapping: SELECT col1, col2, col3 => [val1, val2, val3], not a hash of { :col1 => val1, ... }
    - an API that takes table schemas ('some_schema.some_table') into account in a consistent (and working) way, plus reflection for this (get the schema from a table)
    - database reflection: get the list of table columns, their database storage types and perhaps the adapter's abstracted types
    - table creation and deletion
    - being able to work with other tables (insert, update) in a loop that enumerates a selection from another table, without having to fetch all records from the table being enumerated

    The purpose is to manipulate data whose structure is unknown at the time the code is written, which is the opposite of object mapping, where the structure, or most of it, is usually well known. I do not need the object-mapping overhead. What are the options, including back-ends for object-mapping libraries?


  • Database schema for Product Properties

    - by Chemosh
    Like so many people, I'm looking for a Products / Product Properties database schema. I'm using Ruby on Rails and (Thinking) Sphinx for faceted searches. Requirements:

    - Adding new product types and their options should not require a change to the database schema.
    - Support faceted searches using Sphinx.

    Solutions I've come across (see Bill Karwin's answer):

    - Option 1: Single Table Inheritance. Not really an option; the table would contain way too many columns.
    - Option 2: Class Table Inheritance. Ruby on Rails caches the database schema on start-up, which means a restart whenever a new type of product is introduced. If you have a sizeable product catalog this could mean hundreds of tables.
    - Option 3: Serialized LOB. Kills being able to do faceted searches without heavy application logic.
    - Option 4: Entity-Attribute-Value. For testing purposes, EAV worked fine. However, it could quickly become a mess and a maintenance hell as you add more and more options (e.g. when an option increases the price or delivery time). (See the sketch below.)

    What option should I go with? What other solutions are out there? Is there a silver bullet (ha) I overlooked?
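
    Part of why Option 4 turns into "maintenance hell" is that EAV rows have to be pivoted back into one record per product before they are useful. A minimal sketch of that application-side pivot, in Java for illustration, assuming a hypothetical eav_values(product_id, attr, value) table and a plain JDBC connection (all names here are illustrative, not from the post):

        import java.sql.*;
        import java.util.*;

        public class EavPivot {
            // Folds EAV rows back into one attribute map per product.
            static Map<Integer, Map<String, String>> load(Connection conn) throws SQLException {
                Map<Integer, Map<String, String>> products = new HashMap<>();
                try (PreparedStatement ps = conn.prepareStatement(
                        "SELECT product_id, attr, value FROM eav_values");
                     ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        products
                            .computeIfAbsent(rs.getInt("product_id"), id -> new HashMap<>())
                            .put(rs.getString("attr"), rs.getString("value"));
                    }
                }
                return products;
            }
        }

    Every faceted query needs this reassembly (or one self-join per attribute), which is exactly the heavy application logic the post is worried about.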


  • Database for managing large volumes of (system) metrics

    - by symcbean
    Hi, I'm looking at building a system for managing and reporting stats on web page performance. I'll be collecting a lot more stats than are available in the standard log formats (approx. 20 metrics), but compared to most types of database applications, the base data structure will be very simple. My problem is that I'll be accumulating a lot of data - in the region of 100,000 records (i.e. sets of metrics) per hour. Of course, resources are very limited! So that it's possible to sensibly interact with the data, I'd need to consolidate each metric into one-minute bins, broken down by URL; then, for anything more than 1 day old, consolidate into 10-minute bins; then, at 1 week, hourly bins. At the front end, I want to provide a view (preferably as plots) of the last hour of data, with the facility for users to drill up/down through defined hierarchies of URLs (which do not always map directly to the hierarchy expressed in the path of the URL) and to view different time frames. Rather than coding all this myself against a relational database, I was wondering if there were tools available which would facilitate both the management of the data and the reporting. I had a look at Mondrian; however, I can't see from the documentation I've looked at whether it's possible to drop the more granular information while maintaining the consolidated views of the data. RRDTool looks promising in terms of managing the data consolidation, but seems to be rather limited in terms of querying the dataset as a multi-dimensional/relational database. What else should I be looking at?
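
    The consolidation scheme described above is mostly timestamp bucketing. A minimal sketch of the binning step in Java (the bin widths come from the post; the Metric record and aggregation by mean are illustrative assumptions):

        import java.time.Instant;
        import java.time.temporal.ChronoUnit;
        import java.util.*;
        import java.util.stream.Collectors;

        public class MetricBinning {
            record Metric(String url, Instant at, double value) {}

            // Truncates a timestamp to its bin start: 1-minute bins for the newest
            // data, 10-minute bins after a day, hourly bins after a week (per the post).
            static Instant binStart(Instant at, Instant now) {
                long ageHours = ChronoUnit.HOURS.between(at, now);
                long binMinutes = ageHours >= 24 * 7 ? 60 : (ageHours >= 24 ? 10 : 1);
                long epochMin = at.getEpochSecond() / 60;
                return Instant.ofEpochSecond((epochMin - epochMin % binMinutes) * 60);
            }

            // Groups samples by (URL, bin start) and averages each group.
            static Map<List<Object>, Double> consolidate(List<Metric> samples, Instant now) {
                return samples.stream().collect(Collectors.groupingBy(
                        m -> List.<Object>of(m.url(), binStart(m.at(), now)),
                        Collectors.averagingDouble(Metric::value)));
            }
        }

    A tool like RRDTool does the same truncation internally; the part it does not give you is the ad hoc, multi-dimensional querying the post asks about, which is why the poster is weighing it against an OLAP engine like Mondrian.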


  • Flexible forms and supporting database structure

    - by sunwukung
    I have been tasked with creating an application that allows administrators to alter the content of the user input form (i.e. add arbitrary fields), the contents of which get stored in a database. Think MODX/WordPress/ExpressionEngine template variables. The approach I've been looking at is implementing concrete tables where the specification is consistent (i.e. user profiles, user content, etc.) and some generic field data tables (i.e. text, boolean) to store non-specific values. Forms (and model fields) would be generated by first querying the tables and retrieving the relevant columns - although I've yet to think about how I would set up validation. I've taken a look at this problem, and it seems to be indicating an EAV-type approach - which, from my brief research, looks like it could be a greater burden than the blessings its flexibility would bring. I've read a couple of posts here, however, which suggest this is a dangerous route: "How to design a generic database whose layout may change over time?" and "Dynamic Database Schema". I'd appreciate some advice on this matter if anyone has some to give. Regards, SWK


  • Best practice for storing HTML in a database column

    - by tbrandao
    I have an application that modifies a table dynamically (think spreadsheet); upon saving the form (which the table is part of), I store that changed table (with user modifications) in a database column named html_Spreadhseet, along with the rest of the form data. Right now I'm just storing the HTML in plain text format with basic escaping of characters. I'm aware that this could be stored as a separate file - the source table (html_workseeet) already is. But from a data-handling perspective it's easier to save the changed HTML table to and from a column, so as to avoid having to come up with a file management strategy (which folder will this live in, now the folder must be included in backups, security policies now need to apply to files, how to sync DB security with the file system, etc.), so to minimize these issues I'm only storing the ... part in the database column. My question is: should I gzip the HTML, maybe use JSON, or some other format to easily store and retrieve the HTML from the database column? What is the best practice for storing HTML content in a database? Or should I just store it as I currently am, as an escaped text column?
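
    If compression turns out to be worthwhile (it usually is for repetitive table markup), the round trip is small. A sketch in Java using the JDK's GZIP streams, assuming the HTML is kept in a binary (BLOB/varbinary-style) column rather than a text column:

        import java.io.*;
        import java.nio.charset.StandardCharsets;
        import java.util.zip.GZIPInputStream;
        import java.util.zip.GZIPOutputStream;

        public class HtmlColumnCodec {
            // Compresses the HTML fragment before it is bound to a binary column.
            static byte[] compress(String html) throws IOException {
                ByteArrayOutputStream buf = new ByteArrayOutputStream();
                try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
                    gz.write(html.getBytes(StandardCharsets.UTF_8));
                }
                return buf.toByteArray();
            }

            // Restores the original HTML after reading the column back.
            static String decompress(byte[] stored) throws IOException {
                try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(stored))) {
                    return new String(gz.readAllBytes(), StandardCharsets.UTF_8);
                }
            }
        }

    The trade-off: a compressed column is no longer searchable or indexable inside the database, which is fine for opaque snapshots but not if the markup ever needs to be queried.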


  • Database Structure for CakePHP Models

    - by Michael T. Smith
    We're building a data tracking web app using CakePHP, and I'm having some issues getting the database structure right. We have Companies that haveMany Sites. Sites haveMany DataSamples. Tags haveAndBelongToMany Sites. That is all set up fine. The problem is "ranking" the sites within tags. We need to store it in the database as an archive. I created a Rank model that is set up like this: rank ( id (int), sample_id (int), tag_id (int), site_id (int), rank (int), total_rows (int) ) So, the question is: how do I create the associations for tag, site and sample to rank? I originally set them as haveMany, but the returned structures don't get me where I'd like to be. It looks like: [Site] => Array ( [Sample] => Array(), [Tag] => Array() ) when I'm really looking for: [Site] => Array ( [Tag] => Array ( [Sample] => Array ( [Rank] => Array ( ...data... ) ) ) ) I think that I may not be structuring the database properly, so if I need to update it please let me know. Otherwise, how do I write a find query that gets me where I need to be? Thanks! Thoughts? Need more details? Just ask!


  • Execute a block of database queries

    - by Nightmare
    I have the following task to complete: in my program I have a block of database queries, or "questions". I want to execute these questions and wait for the results of all of them, or catch an error if one question fails! My Question object looks like this (simplified): public class DbQuestion(String sql) { [...] } [...] //The answer is just a holder for custom data... public void SetAnswer(DbAnswer answer) { //Store the answer in the question and fire an event to the listeners this.OnAnswered(EventArgs.Empty); } [...] public void SetError() { //Signal an Error in this query! this.OnError(EventArgs.Empty); } So every question fired at the database has a listener that waits for the parsed result. Now I want to fire some questions asynchronously at the database (max. 5 or so) and fire an event with the data from all questions, or an error if even one question throws one! What is the best (or at least a good) way to accomplish this task? Can I really execute more than one question in parallel, and stop all my work when one question throws an error? I think I need some inspiration on this... Just a note: I'm working with .NET Framework 2.0.
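
    The question targets .NET 2.0, where ManualResetEvent and WaitHandle.WaitAll would be the usual building blocks, but the coordination pattern itself is language-neutral: run the queries in parallel, collect every result, and surface the first failure. A sketch of that pattern in Java for illustration, with each "question" reduced to a Callable:

        import java.util.*;
        import java.util.concurrent.*;

        public class QuestionBlock {
            // Runs all "questions" in parallel, waits for the whole block, and either
            // returns every answer or rethrows the first error that occurred.
            static List<String> runAll(List<Callable<String>> questions) throws Exception {
                ExecutorService pool = Executors.newFixedThreadPool(5); // "max. 5 or so"
                try {
                    List<Future<String>> futures = pool.invokeAll(questions); // blocks until all are done
                    List<String> answers = new ArrayList<>();
                    for (Future<String> f : futures) {
                        answers.add(f.get()); // a failed query surfaces here as an ExecutionException
                    }
                    return answers;
                } finally {
                    pool.shutdown();
                }
            }
        }

    So yes, the queries can run in parallel, and "one error fails the block" falls out naturally: the first Future.get() that hits a failed query rethrows its error.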


  • Separating and counting CSV entries from a database (Access/ASP Classic)

    - by Katherine Perotta
    Hey, I could really use some help with this one. I have an FAQ with multiple "tags", and I would like to separate and count them. They are currently in the database as follows (sample records):

    ID | TITLE         | CONTENT         | TAGS
    1  | sampletitle 1 | samplecontent   | tag1,tag2,tag3
    2  | sampletitle 2 | moresamplestuff | tag3,tag4,tag5

    How could I go about counting the number of times each tag is used? In the end, would it be easier to just create a separate table called TAGS, with a single tag corresponding to a single ID in FAQ? The only reason I'd rather not do that is because I have so much data already it would take quite a while. However, if there's no alternative, or if it's easier than doing string parsing like this, I'm willing to do it. The goal is to display each unique tag and the number of times it is used. Would it be better to do the heavy lifting in the database or in ASP? I have gotten as far as getting a list of all tags and displaying them in an array (with each tag separated). So at this point what I need to do is count each value and then remove the duplicates (while preserving the count somewhere). This is in ASP Classic using an Access database. Thanks!
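
    The counting step the poster is stuck on is a split-and-tally over the TAGS column. The post is ASP Classic, but the logic is the same everywhere; a sketch in Java for illustration, where the input list stands in for the column values already fetched:

        import java.util.*;

        public class TagCounter {
            // Tallies comma-separated tag strings, e.g. "tag1,tag2,tag3".
            static Map<String, Integer> count(List<String> tagColumns) {
                Map<String, Integer> counts = new TreeMap<>(); // sorted for display
                for (String csv : tagColumns) {
                    for (String tag : csv.split(",")) {
                        tag = tag.trim().toLowerCase();
                        if (!tag.isEmpty()) {
                            counts.merge(tag, 1, Integer::sum);
                        }
                    }
                }
                return counts;
            }

            public static void main(String[] args) {
                System.out.println(count(List.of("tag1,tag2,tag3", "tag3,tag4,tag5")));
                // {tag1=1, tag2=1, tag3=2, tag4=1, tag5=1}
            }
        }

    That said, the separate TAGS table the poster suspects is the better long-term answer: a normalized (faq_id, tag) table turns this whole exercise into a single GROUP BY query.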


  • How to update a string property of an SQLite database item

    - by Thomas Joos
    Hi all, I'm trying to write an application that checks if there is an active internet connection. If so, it reads an XML file and checks every 'lastupdated' item (a PHP-generated string). It compares it to the database items, and if there is a new value, that particular item needs to be updated. My code seems to work (compiles, no error messages, no failures...) but I notice that the particular property does not change; it becomes (null). When I output the bound string value it returns the correct string value I want to update into the db. Any idea what I'm doing wrong? const char *sql = "update myTable Set last_updated=? Where node_id =?"; sqlite3_stmt *statement; // Preparing a statement compiles the SQL query into a byte-code program in the SQLite library. // The third parameter is either the length of the SQL string or -1 to read up to the first null terminator. if (sqlite3_prepare_v2(database, sql, -1, &statement, NULL) == SQLITE_OK){ NSLog(@"last updated item: %@", [d lastupdated]); sqlite3_bind_text(statement, 1, [d lastupdated],-1,SQLITE_TRANSIENT); sqlite3_bind_int (statement, 2, [d node_id]); }else { NSLog(@"SQLite statement error!"); } if(SQLITE_DONE != sqlite3_step(statement)){ NSAssert1(0, @"Error while updating. '%s'", sqlite3_errmsg(database)); }else { NSLog(@"SQLite Update done!"); }


  • Database solution for 200 million writes/day, monthly summarization queries

    - by sb
    Hello. I'm looking for help deciding on which database system to use. (I've been googling and reading for the past few hours; it now seems worthwhile to ask for help from someone with firsthand knowledge.) I need to log around 200 million rows (or more) per 8-hour workday to a database, then perform weekly/monthly/yearly summary queries on that data. The summary queries would be for collecting data for things like billing statements, e.g. "How many transactions of type A did each user run this month?" (it could be more complex, but that's the general idea). I can spread the database amongst several machines, as necessary, but I don't think I can take old data offline. I'll definitely need to be able to query a month's worth of data, maybe a year. These queries would be for my own use, and wouldn't need to be generated in real time for an end user (they could run overnight, if needed). Does anyone have any suggestions as to which databases would be a good fit? P.S. Cassandra looks like it would have no problem handling the writes, but what about the huge monthly table scans? Is anyone familiar with Cassandra/Hadoop MapReduce performance?


  • Database Tutorial: The method open() is undefined for the type MainActivity.DBAdapter

    - by user2203633
    I am trying to follow this database tutorial on SQLite in Eclipse: https://www.youtube.com/watch?v=j-IV87qQ00M But I get a few errors at the end. At db.open(); I get the error "The method open() is undefined for the type MainActivity.DBAdapter", and similar errors for insertRecord and close. MainActivity: package com.example.studentdatabase; import java.io.File; import java.io.FileNotFoundException; import java.io.FileOutputStream; import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; import android.app.Activity; import android.app.ListActivity; import android.content.Intent; import android.database.Cursor; import android.os.Bundle; import android.util.Log; import android.view.LayoutInflater; import android.view.View; import android.view.ViewGroup; import android.widget.BaseAdapter; import android.widget.Button; import android.widget.Toast; public class MainActivity extends Activity { /** Called when the activity is first created. */ //DBAdapter db = new DBAdapter(this); @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); Button addBtn = (Button)findViewById(R.id.add); addBtn.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { Intent i = new Intent(MainActivity.this, addassignment.class); startActivity(i); } }); try { String destPath = "/data/data/" + getPackageName() + "/databases/AssignmentDB"; File f = new File(destPath); if (!f.exists()) { CopyDB( getBaseContext().getAssets().open("mydb"), new FileOutputStream(destPath)); } } catch (FileNotFoundException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } DBAdapter db = new DBAdapter(); //---add an assignment--- db.open(); long id = db.insertRecord("Hello World", "2/18/2012", "DPR 224", "First Android Project"); id = db.insertRecord("Workbook Exercises", "3/1/2012", "MAT 100", "Do odd numbers"); db.close(); //---get all Records--- /* db.open(); Cursor c = db.getAllRecords(); if (c.moveToFirst()) { do { DisplayRecord(c); } while (c.moveToNext()); } db.close(); */ /* //---get a Record--- db.open(); Cursor c = db.getRecord(2); if (c.moveToFirst()) DisplayRecord(c); else Toast.makeText(this, "No Assignments found", Toast.LENGTH_LONG).show(); db.close(); */ //---update Record--- /* db.open(); if (db.updateRecord(1, "Hello Android", "2/19/2012", "DPR 224", "First Android Project")) Toast.makeText(this, "Update successful.", Toast.LENGTH_LONG).show(); else Toast.makeText(this, "Update failed.", Toast.LENGTH_LONG).show(); db.close(); */ /* //---delete a Record--- db.open(); if (db.deleteRecord(1)) Toast.makeText(this, "Delete successful.", Toast.LENGTH_LONG).show(); else Toast.makeText(this, "Delete failed.", Toast.LENGTH_LONG).show(); db.close(); */ } private class DBAdapter extends BaseAdapter { private LayoutInflater mInflater; //private ArrayList<> @Override public int getCount() { return 0; } @Override public Object getItem(int arg0) { return null; } @Override public long getItemId(int arg0) { return 0; } @Override public View getView(int arg0, View arg1, ViewGroup arg2) { return null; } } public void CopyDB(InputStream inputStream, OutputStream outputStream) throws IOException { //---copy 1K bytes at a time--- byte[] buffer = new byte[1024]; int length; while ((length = inputStream.read(buffer)) > 0) { outputStream.write(buffer, 0, length); } inputStream.close(); outputStream.close(); } public void DisplayRecord(Cursor c) { Toast.makeText(this, "id: " + c.getString(0) + "\n" + 
"Title: " + c.getString(1) + "\n" + "Due Date: " + c.getString(2), Toast.LENGTH_SHORT).show(); } public void addAssignment(View view) { Intent i = new Intent("com.pinchtapzoom.addassignment"); startActivity(i); Log.d("TAG", "Clicked"); } } DBAdapter code: package com.example.studentdatabase; public class DBAdapter { public static final String KEY_ROWID = "id"; public static final String KEY_TITLE = "title"; public static final String KEY_DUEDATE = "duedate"; public static final String KEY_COURSE = "course"; public static final String KEY_NOTES = "notes"; private static final String TAG = "DBAdapter"; private static final String DATABASE_NAME = "AssignmentsDB"; private static final String DATABASE_TABLE = "assignments"; private static final int DATABASE_VERSION = 2; private static final String DATABASE_CREATE = "create table if not exists assignments (id integer primary key autoincrement, " + "title VARCHAR not null, duedate date, course VARCHAR, notes VARCHAR );"; private final Context context; private DatabaseHelper DBHelper; private SQLiteDatabase db; public DBAdapter(Context ctx) { this.context = ctx; DBHelper = new DatabaseHelper(context); } private static class DatabaseHelper extends SQLiteOpenHelper { DatabaseHelper(Context context) { super(context, DATABASE_NAME, null, DATABASE_VERSION); } @Override public void onCreate(SQLiteDatabase db) { try { db.execSQL(DATABASE_CREATE); } catch (SQLException e) { e.printStackTrace(); } } @Override public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) { Log.w(TAG, "Upgrading database from version " + oldVersion + " to " + newVersion + ", which will destroy all old data"); db.execSQL("DROP TABLE IF EXISTS contacts"); onCreate(db); } } //---opens the database--- public DBAdapter open() throws SQLException { db = DBHelper.getWritableDatabase(); return this; } //---closes the database--- public void close() { DBHelper.close(); } //---insert a record into the database--- public long insertRecord(String title, String duedate, String course, String notes) { ContentValues initialValues = new ContentValues(); initialValues.put(KEY_TITLE, title); initialValues.put(KEY_DUEDATE, duedate); initialValues.put(KEY_COURSE, course); initialValues.put(KEY_NOTES, notes); return db.insert(DATABASE_TABLE, null, initialValues); } //---deletes a particular record--- public boolean deleteContact(long rowId) { return db.delete(DATABASE_TABLE, KEY_ROWID + "=" + rowId, null) > 0; } //---retrieves all the records--- public Cursor getAllRecords() { return db.query(DATABASE_TABLE, new String[] {KEY_ROWID, KEY_TITLE, KEY_DUEDATE, KEY_COURSE, KEY_NOTES}, null, null, null, null, null); } //---retrieves a particular record--- public Cursor getRecord(long rowId) throws SQLException { Cursor mCursor = db.query(true, DATABASE_TABLE, new String[] {KEY_ROWID, KEY_TITLE, KEY_DUEDATE, KEY_COURSE, KEY_NOTES}, KEY_ROWID + "=" + rowId, null, null, null, null, null); if (mCursor != null) { mCursor.moveToFirst(); } return mCursor; } //---updates a record--- public boolean updateRecord(long rowId, String title, String duedate, String course, String notes) { ContentValues args = new ContentValues(); args.put(KEY_TITLE, title); args.put(KEY_DUEDATE, duedate); args.put(KEY_COURSE, course); args.put(KEY_NOTES, notes); return db.update(DATABASE_TABLE, args, KEY_ROWID + "=" + rowId, null) > 0; } } addassignment code: package com.example.studentdatabase; public class addassignment extends Activity { DBAdapter db = new DBAdapter(this); @Override protected void onCreate(Bundle 
savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.add); } public void addAssignment(View v) { Log.d("test", "adding"); //get data from form EditText nameTxt = (EditText)findViewById(R.id.editTitle); EditText dateTxt = (EditText)findViewById(R.id.editDuedate); EditText courseTxt = (EditText)findViewById(R.id.editCourse); EditText notesTxt = (EditText)findViewById(R.id.editNotes); db.open(); long id = db.insertRecord(nameTxt.getText().toString(), dateTxt.getText().toString(), courseTxt.getText().toString(), notesTxt.getText().toString()); db.close(); nameTxt.setText(""); dateTxt.setText(""); courseTxt.setText(""); notesTxt.setText(""); Toast.makeText(addassignment.this,"Assignment Added", Toast.LENGTH_LONG).show(); } public void viewAssignments(View v) { Intent i = new Intent(this, MainActivity.class); startActivity(i); } } What is wrong here? Thanks in advance.
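
    The compiler error is a name clash, not a database problem: MainActivity declares its own private inner class "DBAdapter extends BaseAdapter" (a stub list adapter), so inside MainActivity the name DBAdapter resolves to that inner class, which has no open(), insertRecord() or close() methods. The standalone com.example.studentdatabase.DBAdapter shown above is never the one being used. A sketch of the fix against the code as posted: delete the inner class, and construct the real adapter with a Context, which its only constructor requires.

        // Inside MainActivity.onCreate(), after removing the inner
        // "private class DBAdapter extends BaseAdapter { ... }" declaration:
        DBAdapter db = new DBAdapter(this); // the real DBAdapter(Context ctx) constructor
        db.open();
        long id = db.insertRecord("Hello World", "2/18/2012", "DPR 224",
                "First Android Project");
        db.close();

    Note that the posted onCreate() also calls new DBAdapter() with no arguments; once the inner class is removed, the compiler will flag that line too, since the real class only defines DBAdapter(Context ctx).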


  • Managing User & Role Security with Oracle SQL Developer

    - by thatjeffsmith
    With the advent of SQL Developer v3.0, users have had access to some powerful database administration features. Version 3.1 introduced more powerful features such as an interface to Data Pump and RMAN. Today I want to talk about some very simple but frequently run tasks that SQL Developer can assist with, like:

    - identifying privs granted to users
    - managing role privs
    - assigning new roles and privs to users & roles

    Before getting started, you'll need a connection to the database with the proper privileges. The common ROLE used to accomplish this is the 'DBA' role. Curious as to what the DBA role is actually comprised of? Let's find out!

    Open the DBA Console. First make sure you're connected to the database you want to manage security on with a privileged administrator account. Then open the View menu and select 'DBA.' (screenshot: Accessing the DBA panel)

    'Create' a Connection. Click on the green '+' button in the DBA panel. It will ask you to choose a previously defined SQL Developer connection. (screenshot: Defining a DBA connection in Oracle SQL Developer) Once connected you will see a tree list of DBA features you can start interacting with.

    Expand the 'Security' Tree Node. As you click on an object in the DBA panel, the 'viewer' will open on the right-hand side, just like you are accustomed to seeing when clicking on a table or stored procedure. (screenshot: Accessing the DBA role) If I'm a newly hired Oracle DBA, the first thing I might want to do is become very familiar with the DBA role. People will be asking you to grant them this role or a subset of its privileges. Once you see what the role can do, you will become VERY protective of it. My favorite 3-letter 4-letter word is 'ANY', and the DBA role is littered with privileges like this. (screenshot: ANY TABLE privs granted to the DBA role) So if this doesn't freak you out, then maybe you should re-consider your career path. Or in other words, don't be granting this role to ANYONE you don't completely trust to take care of your database. If I'm just assigned a new database to manage, the first thing I might want to look at is just WHO has been assigned the DBA role. SQL Developer makes this easy to ascertain: just click on the 'User Grantees' panel. (screenshot: Who has the keys to your car?)

    Making Changes to Roles and Users. If you right-click on a user in the tree, you can do individual tasks like grant a sys priv or expire an account. But you can also use the 'Edit User' dialog to do a lot of work in one pass. As you click through options in these dialogs, it will build the 'ALTER USER' script in the SQL panel, which can then be executed, or copied to the worksheet or to your .SQL file to be run at your discretion.

    A Few Clicks vs a Lot of Typing. These dialogs won't make you a DBA, but if you're pressed for time and you're already in SQL Developer, they can sure help you make up for lost time in just a few clicks!
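
    The 'User Grantees' check described above maps to a one-row-per-grantee query against Oracle's DBA_ROLE_PRIVS data dictionary view. A sketch of the same check done programmatically over JDBC, assuming a connection with the privileges the post describes (the connection URL and credentials are illustrative):

        import java.sql.*;

        public class WhoHasDba {
            public static void main(String[] args) throws SQLException {
                // Requires a privileged account, as the post notes.
                try (Connection conn = DriverManager.getConnection(
                         "jdbc:oracle:thin:@//dbhost:1521/orcl", "admin", "secret");
                     PreparedStatement ps = conn.prepareStatement(
                         "SELECT grantee FROM dba_role_privs WHERE granted_role = ?")) {
                    ps.setString(1, "DBA");
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            System.out.println(rs.getString("grantee")); // who has the keys
                        }
                    }
                }
            }
        }

    The GUI panels are doing essentially this on your behalf, which is the article's point: a few clicks instead of remembering the dictionary views.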


  • Simplifying Human Capital Management with Mobile Applications

    - by HCM-Oracle
    By Aaron Green

    If you're starting to think 'mobility' is a recurring theme in your reading, you'd be right. For those who haven't started to build organisational capabilities to leverage it, it's fair to say you're late to the party. The good news: better late than never. Research firm eMarketer says the worldwide smartphone audience will total 1.75 billion this year, while communications technology and services provider Ericsson suggests smartphones will triple to 5.6 billion globally by 2019. It should be no surprise: smartphone adoption is reaching the farthest corners of the globe, and the subsequent impact of enterprise applications enabled by these devices is driving business performance improvement and will continue to do so. Companies using advanced workforce analytics can add significantly to the bottom line, while impacting customer satisfaction, quality and productivity. It's a statement that makes most business leaders sit forward in their chairs. Achieving these three standards is like sipping The Golden Elixir for the business world. No-one would argue their importance. So what are 'advanced workforce analytics'? Simply, they're unprecedented access to workforce trends and performance markers. Many are made possible by a mobile world and the enterprise applications that come with it on smart devices. Some refer to it as 'the consumerisation of IT'. As this phenomenon has matured and become more widely appreciated, it has impacted the spectrum of functional units within an enterprise differently, but powerfully. Whether it's sales, HR, marketing, IT, or operations, all have benefited from a more mobile approach. It has been the catalyst for improvement in, and management of, the employee experience - the net result of which is happier customers.

    The obvious benefits, but the lesser-realised impact. Most people understand that mobility allows for greater efficiency and productivity, collaboration and flexibility, but how that translates into business outcomes within the various functional groups is lesser known. In actuality, mobility has helped galvanise partnerships between cross-functional groups within the enterprise. Where in some quarters it was once feared mobility could fragment a workforce, its rallying cry of support is coming from what you might describe as an unlikely source - HR. As the bedrock of an enterprise, it is conceivable HR might contemplate the possible negative impact of a mobile workforce that no longer sits in an office, at the same desks, every day. After all, who would know what they were doing or saying? How would they collaborate? It's reasonable to see why HR might have a legitimate claim to try and retain as much 'perceived control' as possible. The reality, however, is that mobility has emancipated human capital and its management. Mobility and enterprise applications are expediting decision making - Google calls it the Zero Moment of Truth, or ZMOT. It enables smoother operation and can contribute to faster growth. From a collaborative perspective, with the growing use of enterprise social media, which in many cases is being driven by HR, workforce planning and the tangible impact of change are much easier to map. This in turn provides a platform from which individuals and teams can thrive. With more agility and ability to anticipate, staff satisfaction and retention are higher, and real-time feedback is constant. 
    The management team can save time, energy and costs with more accurate data, which is then intelligently applied across the workforce to truly engage with staff, customers and partners. From a human capital management (HCM) perspective, mobility can help you close the loop on true talent management. It can enhance what managers can offer and what employees can provide in return. It can create nested relationships and powerful partnerships.

    IT and HR - partners and stewards of mobility. One effect of enterprise mobility is an evolution in the nature of the relationship between HR and IT, from one of service provision to partnership. The reason for the dynamic shift is largely due to the 'bring your own device' (BYOD) movement, which is transitioning to a 'bring your own application' (BYOA) scenario. As enterprise technology has in some ways reverse-engineered its solutions to help manage this situation, the partnership between IT (the functional owner) and HR (the strategic enabler) is deeply entrenched. And it has to be. The CIO and the HR leader are faced with compliance and regulatory issues and concerns around information security and personal privacy on a daily basis, complicated by global reach and varied domestic legislation. There are tens of thousands of new mobile apps entering the market each month and, unlike many consumer applications which get downloaded but are often never opened again after initial perusal, enterprise applications are being relied upon by functional groups, not least by HR to enhance people management. It requires a systematic approach across all applications in use within the enterprise in order to ensure they're used to best effect.

    No turning back, and no desire to. With real-time analytics on performance and the ability for immediate feedback, there is no turning back for managers. In my experience with Oracle, our customers' operational efficiency is at record levels. It's clear, as a result of the combination of individual KPIs and organisational goals, that CIOs have been able to give HR leaders the ability to build predictive models that feed into an enterprise organisation's evolving strategy. It also helps them ensure regulatory compliance much more easily. Once an arduous task, with mobile-enabled automation and quality data, compliance is simpler. Their world has changed for the better. For the CIO, mobility also assists them to optimise performance. While it doesn't come without challenges, mobile-enabled applications and the native experience users have with them mean employees don't need high-level technical expertise, and it reduces the training and engagement required from the IT team so they can focus on other things that deliver value to the bottom line, all the while lowering the cost of assets and related maintenance work by simplifying processes.

    Rewards of a mobile enterprise outweigh risks. With mobile tools allowing us to increasingly integrate our personal and professional lives, terms like "office hours" are becoming irrelevant, so work/life balance is a cultural must. Enterprises are expected to offer tools that enable workers to access information from anywhere, at any time, from any device. Employees want simplicity and convenience, but it doesn't stop at private enterprise. This is a societal shift. Governments, which traditionally have been known to be slower to adopt newer technology, are also offering support for local businesses to go mobile. 
Several state government websites have advice on how to create mobile apps and more. And as recently as last week the Victorian Minister for Technology Gordon Rich-Phillips unveiled his State government's ICT roadmap for the next two years, which details an increased use of the public cloud, as well as mobile communications, and improved access to online data-sets. Tech giants are investing significantly in solutions designed to simplify mobile deployment and enablement. The mobility trend is creating a wave of change in the industry and driving transformation in the enterprise. If you're not on that wave, the business risk continues to rise as your competitiveness drops. Aaron is the Vice President of HCM Strategy at Oracle Corporation where he is responsible for researching and identifying emerging trends in the practice of Human Resources and works to deliver industry-leading technology solutions. Other responsibilities include, ownership of Oracle's innovative HCM solutions across JAPAC and enabling organisations to transform and modernise their workforce tools. Follow him on Twitter @aaronjgreen


  • Should I partition my main table with 2 million rows?

    - by domribaut
    Hi, I am a developer and would need some DBA advice. We are starting to get performance problems with an MSSQL 2005 database. The visible effect of the incidents is mainly CPU hogging on the server, but operations reported that it was also draining resources from the SAN (not always). The main source of issues is for sure in some application, but I am wondering if we should partition some of the main tables anyway in order to relieve the I/O pressure. The database is about 60 GB in one file. The main table (Order) has 2.1 million rows with 215 columns (but none is huge). We have an integer as PK, so it should be OK to define a partition function. Will we win something with partitioning? Will partitioned indexes buy us something? Here are some more facts about the DB and the table:

    database_name | database_size | unallocated space
    My_base       | 57173.06 MB   | 79.74 MB

    reserved      | data          | index_size   | unused
    29 444 808 KB | 26 577 320 KB | 2 845 232 KB | 22 256 KB

    name  | rows      | reserved     | data         | index_size   | unused
    Order | 2 097 626 | 4 403 832 KB | 2 756 064 KB | 1 646 080 KB | 1688 KB

    Thanks for any advice. Dom


  • Log file deleted on Oracle database: how to re-create it?

    - by Daniel
    For my database assignment we were looking into 'database corruption', and I was asked to delete the second redo log file, which I did with the command rm log02a.rdo in the $HOME/ORADATA/u03 directory. Now I started up my database using startup pfile=$PFILE nomount, then I mounted it using the command alter database mount; Now when I try to open it with alter database open; it gives me the error: ORA-03113: end-of-file on communication channel Process ID: 22125 Session ID: 25 Serial number: 1 I am assuming this is because the second redo log file is missing. There is still log01a.rdo, but not the one I deleted. How can I go about recovering this now so that I can open my database again? I have looked into the database creation scripts, and they specified the log02a.rdo file to be size 10M and part of group 2. If I do select group#, member from v$logfile; I get: 1 /oradata/student_db/user06/ORADATA/u03/log01a.rdo 2 /oradata/student_db/user06/ORADATA/u03/log02a.rdo 3 /oradata/student_db/user06/ORADATA/u03/log03a.rdo 4 /oradata/student_db/user06/ORADATA/u03/log04a.rdo So it is part of group 2. If I try to add the log02a.rdo file again, I get "already part of the database". If I drop group 2 and then add it again with these commands: ALTER DATABASE ADD LOGFILE GROUP 2 ('$HOME/ORADATA/u03/log02a.rdo') SIZE 10M; nothing happens - it supposedly alters the database, but it still won't open. Any ideas what I can do to re-create this and be able to open my database again?


  • Entity Framework and multi-tenancy database design

    - by Junto
    I am looking at multi-tenancy database schema design for a SaaS concept. It will be ASP.NET MVC - EF, but that isn't so important. Below you can see an example database schema (the Tenant being the Company). The CompanyId is replicated throughout the schema, and the primary key has been placed on both the natural key plus the tenant Id. Plugging this schema into the Entity Framework gives the following errors when I add the tables into the Entity Model file (Model1.edmx):

    - The relationship 'FK_Order_Customer' uses the set of foreign keys '{CustomerId, CompanyId}' that are partially contained in the set of primary keys '{OrderId, CompanyId}' of the table 'Order'. The set of foreign keys must be fully contained in the set of primary keys, or fully not contained in the set of primary keys to be mapped to a model.
    - The relationship 'FK_OrderLine_Customer' uses the set of foreign keys '{CustomerId, CompanyId}' that are partially contained in the set of primary keys '{OrderLineId, CompanyId}' of the table 'OrderLine'. The set of foreign keys must be fully contained in the set of primary keys, or fully not contained in the set of primary keys to be mapped to a model.
    - The relationship 'FK_OrderLine_Order' uses the set of foreign keys '{OrderId, CompanyId}' that are partially contained in the set of primary keys '{OrderLineId, CompanyId}' of the table 'OrderLine'. The set of foreign keys must be fully contained in the set of primary keys, or fully not contained in the set of primary keys to be mapped to a model.
    - The relationship 'FK_Order_Customer' uses the set of foreign keys '{CustomerId, CompanyId}' that are partially contained in the set of primary keys '{OrderId, CompanyId}' of the table 'Order'. The set of foreign keys must be fully contained in the set of primary keys, or fully not contained in the set of primary keys to be mapped to a model.
    - The relationship 'FK_OrderLine_Customer' uses the set of foreign keys '{CustomerId, CompanyId}' that are partially contained in the set of primary keys '{OrderLineId, CompanyId}' of the table 'OrderLine'. The set of foreign keys must be fully contained in the set of primary keys, or fully not contained in the set of primary keys to be mapped to a model.
    - The relationship 'FK_OrderLine_Order' uses the set of foreign keys '{OrderId, CompanyId}' that are partially contained in the set of primary keys '{OrderLineId, CompanyId}' of the table 'OrderLine'. The set of foreign keys must be fully contained in the set of primary keys, or fully not contained in the set of primary keys to be mapped to a model.
    - The relationship 'FK_OrderLine_Product' uses the set of foreign keys '{ProductId, CompanyId}' that are partially contained in the set of primary keys '{OrderLineId, CompanyId}' of the table 'OrderLine'. The set of foreign keys must be fully contained in the set of primary keys, or fully not contained in the set of primary keys to be mapped to a model.

    The question is in two parts:

    - Is my database design incorrect? Should I refrain from these compound primary keys? I'm questioning my sanity regarding the fundamental schema design (frazzled brain syndrome). Please feel free to suggest the 'idealized' schema.
    - Alternatively, if the database design is correct, is EF unable to match the keys because it perceives these foreign keys as mis-configured 1:1 relationships? In which case, is this an EF bug, and how can I work around it?


  • Pulling a record from a MySQL database only works for userid, not email

    - by user2908467
    This function works because I search by userid: private void showList_Click(object sender, EventArgs e) { int id = 0; for (int i = 0; i <= sqlClient.Count("UserList"); i++) { Dictionary<string, string> dik = sqlClient.Select("UserList", "userid = " + id); var lines = dik.Select(kv => kv.Key + ": " + kv.Value.ToString()); userList.AppendText(string.Join(Environment.NewLine, lines)); userList.AppendText(Environment.NewLine); userList.AppendText("--------------------------------------"); id++; } } This function does not work because I search by email: private void login_Click(object sender, EventArgs e) { string email = lemail.Text; Dictionary<string, string> dik = sqlClient.Select("UserList", "firstname = " + email); var lines = dik.Select(kv => kv.Key + ": " + kv.Value.ToString()); logged.AppendText(string.Join(Environment.NewLine, lines)); } This is the error message I receive when I click on the login button: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '@aol.com' at line 1 The email I searched for in the database was "[email protected]" without quotes. I'm led to believe by the error message that the @ sign is causing a conflict, as I know it is a special character, but I am having a hard time figuring out what phrase to search for to help me. Also, here is the function that is being called: public Dictionary<string, string> Select(string table, string WHERE) { //This method selects from the database; it retrieves data from it. //You must make a dictionary to use this since it both saves the column //and the value. i.e. "age" and "33" so you can easily search for values. //Example: SELECT * FROM names WHERE name='John Smith' // This example would retrieve all data about the entry with the name "John Smith" //Code = Dictionary<string, string> myDictionary = Select("names", "name='John Smith'"); //This code creates a dictionary and fills it with info from the database. string query = "SELECT * FROM " + table + " WHERE " + WHERE + ""; Dictionary<string, string> selectResult = new Dictionary<string, string>(); if (this.Open()) { MySqlCommand cmd = new MySqlCommand(query, conn); MySqlDataReader dataReader = cmd.ExecuteReader(); try { while (dataReader.Read()) { for (int i = 0; i < dataReader.FieldCount; i++) { selectResult.Add(dataReader.GetName(i).ToString(), dataReader.GetValue(i).ToString()); } } dataReader.Close(); } catch { } this.Close(); return selectResult; } else { return selectResult; } } My database table is called "UserList". The fields, in order, are: userid, email, password, lastname, firstname. Any help would be greatly appreciated. This site is amazing!
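
    The error message gives the bug away: the email is concatenated into the WHERE clause with no quotes around it, so MySQL tries to parse everything from the @ onward as SQL and fails (the userid version works only because numbers need no quotes). Quoting the value would fix the parse error, but binding it as a parameter fixes the parse error and the SQL injection risk at once. A sketch of the pattern in Java/JDBC for illustration - the same idea exists in Connector/NET via MySqlCommand.Parameters; the table and column names come from the post:

        import java.sql.*;

        public class UserLookup {
            // Finds a user by email with a bound parameter instead of string concatenation.
            static void printUser(Connection conn, String email) throws SQLException {
                String sql = "SELECT * FROM UserList WHERE email = ?"; // note: email, not firstname
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setString(1, email); // the driver handles quoting/escaping
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            System.out.println(rs.getString("userid") + ": "
                                    + rs.getString("firstname") + " " + rs.getString("lastname"));
                        }
                    }
                }
            }
        }

    Note also that the posted login code filters on the firstname column rather than email, so even with correct quoting it would match nothing.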



  • Fastest way to move records from an Oracle database into SQL Server

    - by user347748
    Ok this is the scenario... I have a table in Oracle that acts like a queue... A VB.net program reads the queue and calls a stored proc in SQL Server that processes and then inserts the message into another SQL Server table and then deletes the record from the oracle table. We use a DataReader to read the records from Oracle and then call the stored proc for each of the records. The program seems to be a little slow. The stored procedure itself isn't slow. The SP by itself when called in a loop can process about 2000 records in 20 seconds. But when called from the .Net program, the execution time is about 5 records per second. I have seen that most of the time consumed is in calling the stored procedure and waiting for it to return. Is there a better way of doing this? Here is a snippet of the actual code Function StartDataXfer() As Boolean Dim status As Boolean = False Try SqlConn.Open() OraConn.Open() c.ErrorLog(Now.ToString & "--Going to Get the messages from oracle", 1) If GetMsgsFromOracle() Then c.ErrorLog(Now.ToString & "--Got messages from oracle", 1) If ProcessMessages() Then c.ErrorLog(Now.ToString & "--Finished Processing all messages in the queue", 0) status = True Else c.ErrorLog(Now.ToString & "--Failed to Process all messages in the queue", 0) status = False End If Else status = True End If StartDataXfer = status Catch ex As Exception Finally SqlConn.Close() OraConn.Close() End Try End Function Private Function GetMsgsFromOracle() As Boolean Try OraDataAdapter = New OleDb.OleDbDataAdapter OraDataTable = New System.Data.DataTable OraSelCmd = New OleDb.OleDbCommand GetMsgsFromOracle = False With OraSelCmd .CommandType = CommandType.Text .Connection = OraConn .CommandText = GetMsgSql End With OraDataAdapter.SelectCommand = OraSelCmd OraDataAdapter.Fill(OraDataTable) If OraDataTable.Rows.Count > 0 Then GetMsgsFromOracle = True End If Catch ex As Exception GetMsgsFromOracle = False End Try End Function Private Function ProcessMessages() As Boolean Try ProcessMessages = False PrepareSQLInsert() PrepOraDel() i = 0 Dim Method As Integer Dim OraDataRow As DataRow c.ErrorLog(Now.ToString & "--Going to call message sending procedure", 2) For Each OraDataRow In OraDataTable.Rows With OraDataRow Method = GetMethod(.Item(0)) SQLInsCmd.Parameters("RelLifeTime").Value = c.RelLifetime SQLInsCmd.Parameters("Param1").Value = Nothing SQLInsCmd.Parameters("ID").Value = GenerateTransactionID() ' Nothing SQLInsCmd.Parameters("UID").Value = Nothing SQLInsCmd.Parameters("Param").Value = Nothing SQLInsCmd.Parameters("Credit").Value = 0 SQLInsCmd.ExecuteNonQuery() 'check the return value If SQLInsCmd.Parameters("ReturnValue").Value = 1 And SQLInsCmd.Parameters("OutPutParam").Value = 0 Then 'success 'delete the input record from the source table once it is logged c.ErrorLog(Now.ToString & "--Moved record successfully", 2) OraDataAdapter.DeleteCommand.Parameters("P(0)").Value = OraDataRow.Item(6) OraDataAdapter.DeleteCommand.ExecuteNonQuery() c.ErrorLog(Now.ToString & "--Deleted record successfully", 2) OraDataAdapter.Update(OraDataTable) c.ErrorLog(Now.ToString & "--Committed record successfully", 2) i = i + 1 Else 'failure c.ErrorLog(Now.ToString & "--Failed to exec: " & c.DestIns & "Status: " & SQLInsCmd.Parameters("OutPutParam").Value & " and TrackId: " & SQLInsCmd.Parameters("TrackID").Value.ToString, 0) End If If File.Exists("stop.txt") Then c.ErrorLog(Now.ToString & "--Stop File Found", 1) 'ProcessMessages = True 'Exit Function Exit For End If End With Next OraDataAdapter.Update(OraDataTable) 
c.ErrorLog(Now.ToString & "--Updated Oracle Table", 1) c.ErrorLog(Now.ToString & "--Moved " & i & " records from Oracle to SQL Table", 1) ProcessMessages = True Catch ex As Exception ProcessMessages = False c.ErrorLog(Now.ToString & "--MoveMsgsToSQL: " & ex.Message, 0) Finally OraDataTable.Clear() OraDataTable.Dispose() OraDataAdapter.Dispose() OraDelCmd.Dispose() OraDelCmd = Nothing OraSelCmd = Nothing OraDataTable = Nothing OraDataAdapter = Nothing End Try End Function Public Function GenerateTransactionID() As Int64 Dim SeqNo As Int64 Dim qry As String Dim SqlTransCmd As New OleDb.OleDbCommand qry = " select seqno from StoreSeqNo" SqlTransCmd.CommandType = CommandType.Text SqlTransCmd.Connection = SqlConn SqlTransCmd.CommandText = qry SeqNo = SqlTransCmd.ExecuteScalar If SeqNo > 2147483647 Then qry = "update StoreSeqNo set seqno=1" SqlTransCmd.CommandText = qry SqlTransCmd.ExecuteNonQuery() GenerateTransactionID = 1 Else qry = "update StoreSeqNo set seqno=" & SeqNo + 1 SqlTransCmd.CommandText = qry SqlTransCmd.ExecuteNonQuery() GenerateTransactionID = SeqNo End If End Function Private Function PrepareSQLInsert() As Boolean 'function to prepare the insert statement for the insert into the SQL stmt using 'the sql procedure SMSProcessAndDispatch Try Dim dr As DataRow SQLInsCmd = New OleDb.OleDbCommand With SQLInsCmd .CommandType = CommandType.StoredProcedure .Connection = SqlConn .CommandText = SQLInsProc .Parameters.Add("ReturnValue", OleDb.OleDbType.Integer) .Parameters("ReturnValue").Direction = ParameterDirection.ReturnValue .Parameters.Add("OutPutParam", OleDb.OleDbType.Integer) .Parameters("OutPutParam").Direction = ParameterDirection.Output .Parameters.Add("TrackID", OleDb.OleDbType.VarChar, 70) .Parameters.Add("RelLifeTime", OleDb.OleDbType.TinyInt) .Parameters("RelLifeTime").Direction = ParameterDirection.Input .Parameters.Add("Param1", OleDb.OleDbType.VarChar, 160) .Parameters("Param1").Direction = ParameterDirection.Input .Parameters.Add("TransID", OleDb.OleDbType.VarChar, 70) .Parameters("TransID").Direction = ParameterDirection.Input .Parameters.Add("UID", OleDb.OleDbType.VarChar, 20) .Parameters("UID").Direction = ParameterDirection.Input .Parameters.Add("Param", OleDb.OleDbType.VarChar, 160) .Parameters("Param").Direction = ParameterDirection.Input .Parameters.Add("CheckCredit", OleDb.OleDbType.Integer) .Parameters("CheckCredit").Direction = ParameterDirection.Input .Prepare() End With Catch ex As Exception c.ErrorLog(Now.ToString & "--PrepareSQLInsert: " & ex.Message) End Try End Function Private Function PrepOraDel() As Boolean OraDelCmd = New OleDb.OleDbCommand Try PrepOraDel = False With OraDelCmd .CommandType = CommandType.Text .Connection = OraConn .CommandText = DelSrcSQL .Parameters.Add("P(0)", OleDb.OleDbType.VarChar, 160) 'RowID .Parameters("P(0)").Direction = ParameterDirection.Input .Prepare() End With OraDataAdapter.DeleteCommand = OraDelCmd PrepOraDel = True Catch ex As Exception PrepOraDel = False End Try End Function What I would like to know is: is there any way to speed up this program? Any ideas/suggestions would be highly appreciated... Regards, Chetan
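
    A likely culprit for the 5-records-per-second rate is the per-row round trip: each message costs a stored-procedure call, a DELETE, and an OraDataAdapter.Update. The usual fix is to batch the work so that N rows cost a handful of round trips instead of several per row. A sketch of the batching idea in Java/JDBC for illustration - the post is VB.NET, where SqlBulkCopy or batched commands play the same role; the table and column names here are hypothetical:

        import java.sql.*;
        import java.util.List;

        public class QueueMover {
            // Moves queued rows in one batch per N rows instead of one round trip per row.
            static void moveBatch(Connection oracle, Connection sqlServer,
                                  List<String[]> rows) throws SQLException {
                oracle.setAutoCommit(false);
                sqlServer.setAutoCommit(false);
                try (PreparedStatement insert = sqlServer.prepareStatement(
                         "INSERT INTO processed_messages (track_id, payload) VALUES (?, ?)");
                     PreparedStatement delete = oracle.prepareStatement(
                         "DELETE FROM message_queue WHERE row_key = ?")) {
                    for (String[] row : rows) {
                        insert.setString(1, row[0]);
                        insert.setString(2, row[1]);
                        insert.addBatch();
                        delete.setString(1, row[0]);
                        delete.addBatch();
                    }
                    insert.executeBatch();  // one round trip for all inserts
                    sqlServer.commit();
                    delete.executeBatch();  // remove sources only after the target commit
                    oracle.commit();
                }
            }
        }

    The trade-off is error granularity: the post's row-at-a-time loop can skip a single bad message, so a real version would fall back to per-row processing when a batch fails.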


  • SQL SERVER – What is AdventureWorks?

    - by pinaldave
    NOTE: If you know the answer to this question, then I request you to stop reading this post right now. Please do not leave a comment about this blog post not being useful to you if you knew the answer. A few days ago, I received a DM asking what the AdventureWorks database is, and why I use it in all my examples instead of any other database (e.g. Pubs or Northwind). As a matter of fact, when I went back to my question list, which I have not yet answered, there were a few more variations of this same question. AdventureWorks is a sample database shipped with SQL Server, and it can be downloaded from the http://codeplex.com site. AdventureWorks replaced Northwind and Pubs as the sample database in SQL Server 2005. The Microsoft team keeps updating the sample database as they release new versions. Here are some quick links:

    - AdventureWorks SQL Server 2008 SR4
    - AdventureWorks 2008R2 November CTP
    - AdventureWorks for SQL Azure (December CTP)
    - AdventureWorks for SQL Server 2005 SP2A
    - SQL SERVER – 2008 – Download and Install Samples Database
    - AdventureWorks 2005 – Detail Tutorial

    I have previously written a few other articles on the same subject; you can find them easily here: [email protected] Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQLAuthority News – Microsoft SQL Server 2008 R2 – PowerPivot for Microsoft Excel 2010

    - by pinaldave
    Microsoft has really and truly created some buzz for PowerPivot. I have been asked to demo PowerPivot recently even when I am delivering relational database training. Below are the details of where everyone can download PowerPivot and try it for themselves. All three downloads center on the same ground-breaking technology: fast manipulation of large data sets (often millions of rows), streamlined integration of data, and the ability to effortlessly share your analysis through Microsoft® SharePoint 2010.
Microsoft SQL Server 2008 R2 – PowerPivot for Microsoft Excel 2010 – RTM: the Microsoft® PowerPivot for Microsoft® Excel 2010 add-in itself.
Microsoft PowerPivot for Excel 2010 Samples: download examples of the types of reports you can create.
Microsoft PowerPivot for Excel 2010 Data Analysis Expressions Sample version 1.0: download this PowerPivot workbook to learn more about DAX calculations.
Note: The brief description for each download is taken from the respective download page. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Is Social Media The Vital Skill You Aren’t Tracking?

    - by HCM-Oracle
    By Mark Bennett - Originally featured in Talent Management Excellence
The ever-increasing presence of the workforce on social media presents opportunities as well as risks for organizations. While on the one hand we read about social media embarrassments happening to organizations, on the other we see that social media activities by workers and candidates can enhance a company’s brand and provide insight into which individuals are, or can become, influencers in the social media sphere. HR can play a key role in helping organizations get the most value out of the activities and presence of workers and candidates, while at the same time helping to manage the risks that come with the permanence and viral nature of social media.

What is Missing from Understanding Our Workforce?
“If only HP knew what HP knows, we would be three-times more productive.”  Lew Platt, Former Chairman, President, CEO, Hewlett-Packard
What Lew Platt recognized was that organizations have only a partial understanding of what their workforce is capable of. This lack of understanding hurts the company in several ways:
1. A particular skill that the company needs in one part of the organization might exist somewhere else, but there is no record that the skill exists, so the need goes unfulfilled.
2. As market conditions change rapidly, the company needs to know its strategic options, but some options are missed entirely because the company doesn’t know that sufficient capability already exists to enable them.
3. Employees may miss out on opportunities to demonstrate how their hidden skills could create new value for the company.
Why don’t companies have that more complete picture of their workforce capabilities – that is, why do they not know what they know? One very good explanation is that companies put most of their effort into rating their workforce according to the jobs and roles they are filling today. This is the essence of two important talent management processes: recruiting and performance appraisals. In recruiting, a set of requirements is put together for a job, either explicitly or indirectly through a job description. During the recruiting process, much of the attention is paid to whether the candidate has the qualifications, the skills, the experience, and the cultural fit to be successful in the role. This makes a lot of sense. In the performance appraisal process, an employee is measured on how well they performed the functions of their role and, in an effort to help the employee do even better next time, on their proficiency in the competencies deemed key to doing that job. Again, the logic is impeccable. But in both these cases, two adages come to mind:
1. What gets measured is what gets managed.
2. You only see what you are looking for.
In other words, because the roles the workforce currently performs are the basis for measuring which capabilities the workforce has, those become the only capabilities to be measured. What was initially meant as a positive, i.e. identify what is needed to perform well and measure it so that it can be managed, comes with the unintended negative consequence of overshadowing the other capabilities the workforce has. This also carries an employee engagement price, for the measurement and management of workforce capabilities typically focuses on where the workforce comes up short.
Again, it makes sense to do this, since improving a capability that appears to result in improved performance benefits both the individual, through improved performance ratings, and the company, through improved productivity. But this is based on the assumption that the capabilities identified, and their required proficiencies, are the only attributes of the individual that matter. Anything else the individual brings that results in high performance often goes unrecognized, or underappreciated at best. As social media begins to occupy a more important part of current and future roles in organizations, businesses must incorporate social media savvy and innovation into job descriptions and expectations. These new measures could provide insight into how well someone can use social media tools to influence communities and decision makers; keep abreast of trends in fast-moving industries; present a positive brand image for the organization around thought leadership, customer focus, and social responsibility; and coordinate and collaborate with partners. These measures should demonstrate the “social capital” the individual has invested in and developed over time. Without this dimension, “short cut” methods may generate a narrow set of positive metrics that do not have real, long-lasting benefits to the organization.

How Workforce Reputation Management Helps HR Harness Social Media
With hundreds of petabytes of social media data flowing across Facebook, LinkedIn, and Twitter, businesses are tapping technology solutions to effectively leverage social media for HR. Workforce reputation management technology helps organizations discover, mobilize, and retain talent by providing insight into the social reputation and influence of the workforce, while also helping organizations monitor employee social media policy compliance and mitigate social media risk. There are three major ways that workforce reputation management technology can play a strategic role in supporting HR:
1. Improve Awareness and Decisions on Talent. Many organizations measure the skills and competencies that they know they need today, but are unaware of what other skills and competencies their workforce has that could be essential tomorrow. And do they know whether their workforce has the reputation and influence to make those skills and competencies more effective? Many organizations lack insight into the social media “reach” of their workforce, which is becoming more critical to business performance. These features help organizations, managers, and employees improve many talent processes and decisions, including the following:
Hiring and Assignments. People and teams with higher reputations are considered more valuable and effective workers. Someone with a high reputation who refers a candidate also carries high credibility as a source for hires.
Training and Development. Reputation trend analysis can inform decisions about training offerings by showing how reputation and influence across the workforce change in concert with training. Worker reputation informs development plans and goal choices by helping the individual see which development efforts result in improved reputation and influence.
Finding Hidden Talent. Managers can discover hidden talent and skills among employees based on a combination of social profile information and social media reputation. Employees can improve their personal brand and accelerate their career development.
2. Talent Search and Discovery. The right technology helps organizations find information on people that might otherwise stay hidden. By leveraging access to candidate and worker social profiles as well as their social relationships, workforce reputation management gives companies a more complete picture of what their people’s knowledge, skills, and attributes are and what they can in turn access. This more complete information helps find the right talent both outside the organization and the perhaps previously hidden talent within it, to fill roles and staff projects, particularly those roles and projects required in reaction to fast-changing opportunities and circumstances.
3. Reputation Brings Credibility. Workforce reputation management technology provides a clearer picture of how candidates and workers are viewed by their peers and communities across a wide range of social reputation and influence metrics. This information is less subject to individual bias and can inform critical decision-making. Knowing an individual’s reputation and influence enables the organization to predict how well their capabilities and behaviors will contribute to desired business outcomes. Many of the roles with the highest impact on overall business performance depend on the individual’s influence and reputation. In addition, reputation and influence measures offer a very tangible source of feedback for workers, providing insight that helps them develop themselves and their careers, and letting them see the effectiveness of those efforts by tracking changes in their reputation and influence over time. The following are some examples of the reputation and influence measures that workforce reputation management could gather and analyze:
Generosity – How often the user reposts others’ posts.
Influence – How often the user’s material is reposted by others.
Engagement – The ratio of recent posts with references (e.g. links to other posts) to the total number of posts.
Activity – How frequently the user posts (e.g. number per day).
Impact – The size of the user’s social networks, which indicates their ability to reach unique followers, friends, or users.
Clout – The number of references to and citations of the user’s material in others’ posts.

The Vital Ingredient of Workforce Reputation Management: Employee Participation
“Nothing about me, without me.” Valerie Billingham, “Through the Patient’s Eyes”, Salzburg Seminar Session 356, 1998
Since the data resides primarily in social media, a question arises: how is that data collected? While much social media activity is publicly accessible (as many who wished otherwise have learned to their chagrin), the social norms of social media have developed to put some restrictions on what is acceptable behavior, and by whom. Disregarding these norms risks a repercussion firestorm. One of the more recognized norms is that while individuals can follow and engage with other individuals’ public social activity (e.g. Twitter updates) fairly freely, the more an organization does this unprompted and without getting permission from the individual beforehand, the more likely the organization is to provoke an outcome totally opposite to the one desired. Instead, the organization must ask the individual for permission, which can be met with resistance.
That resistance comes from not knowing how the information will be used, how it will be shared with others, and from not receiving enough benefit in return for granting permission. As the quote above about patient concerns and rights succinctly states, no one likes not feeling in control of information about themselves, or the uncertainty about where it will be used. This is well understood in consumer social media (i.e. permission-based marketing) and is equally applicable to workforce reputation management. However, asking permission leaves open the very real possibility that no one, or very few, will grant it, resulting in a data set too small to be useful to the company.

Connecting Individual Motivation to Organization Needs
So what makes an individual decide to grant an organization access to the data it wants? It is when the individual’s own motivations are in alignment with the organization’s objectives. In the case of workforce reputation management, when the individual is motivated by a desire for increased visibility and career growth opportunities to advertise their skills and their level of influence and reputation, they are aligned with the organization’s objectives: to fill resource needs, or to strategically build better awareness of what skills are present in the workforce, along with levels of influence and reputation. Individuals can see the benefit of granting access permission to the company through multiple means. One is simple social awareness: they begin to discover that the peers who are getting more career opportunities are those who have signed up for workforce reputation management. Another is where companies take the message directly to the individual: we think you would benefit from signing up with our workforce reputation management solution. Another, more strategic, approach is to make reputation management part of a larger career development effort by the company, providing a wide set of tools to help the workforce plan and take action to achieve their career aspirations in the organization. An effective mechanism that connects the visibility and career growth motivations of the workforce with the larger context of the organization’s business objectives is to use game mechanics to help individuals transform their career goals into concrete, actionable steps, such as signing up for reputation management. This works in favor of companies looking to use workforce reputation management because the workforce is more apt to see how it fits into achieving their overall career goals, as well as to see how participation brings additional benefits. Once individuals have signed up with reputation management, not only have they made themselves more visible within the organization and increased their career growth opportunities, they have also enabled a tool they can use to better understand how their actions and behaviors affect their influence and reputation. Since they will be able to see their reputation and influence measurements change over time, they will gain better insight into how reputation and influence affect their effectiveness in a role, and how their behaviors and skill levels in turn affect their influence and reputation. This insight can trigger much more directed, and effective, efforts by the individual to improve their ability to perform at a higher level and become more productive.
The increased sense of autonomy the individual experiences, in linking the insight they gain to the actions and behavior changes they make, greatly enhances their engagement with their role as well as their career prospects within the company. Workforce reputation management takes the wide range of disparate data about the workforce produced across various social media platforms and transforms it into accessible, relevant, and actionable information that helps the organization achieve its desired business objectives. Social media holds untapped insights about your talent, brand, and business, and workforce reputation management can help unlock them. Imagine: if you could find the hidden secrets of your business, how much more productive and efficient would your organization be?
Mark Bennett is a Director of Product Strategy at Oracle. Mark focuses on setting the strategic vision and direction for tools that help organizations understand, shape, and leverage the capabilities of their workforce to achieve business objectives, as well as help individuals work effectively to achieve their goals and navigate their own growth. His combination of a deep technical background in software design and development, coupled with broad knowledge of the business challenges of today’s globalized, rapidly changing, technology-accelerated economy, has enabled him to identify and incorporate key innovations that are central to Oracle Fusion’s unique value proposition. Over the course of his career, Mark has been in charge of the design, development, and strategy of Talent Management products, and of the design and development of cutting-edge software better equipped to handle the increasingly complex demands of users while remaining easy to use. Follow him @mpbennett

    Read the article

  • Best Practices - updated: which domain types should be used to run applications

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (formerly named Logical Domains). This is an updated and enlarged version of the post on this topic originally published in October 2012. One frequent question is: "what type of domain should I use to run applications?" There used to be a simple answer, "run applications in guest domains in almost all cases", but now there are more things to consider. Enhancements to Oracle VM Server for SPARC and the introduction of systems like the current SPARC servers, including the T4 and T5 systems, the Oracle SuperCluster T5-8, and the Oracle SuperCluster M6-32, provide scale and performance much higher than the original servers that ran domains. Single-CPU performance, I/O capacity, and memory sizes are much larger now, and far more demanding applications are now being hosted in logical domains. The general advice continues to be "use guest domains in almost all cases", meaning "use virtual I/O rather than physical I/O", unless there is a specific reason to use the other domain types. The sections below discuss the criteria for choosing between domain types.

Review: division of labor and types of domain
Oracle VM Server for SPARC offloads management and I/O functionality from the hypervisor to domains (also called virtual machines), providing a modern alternative to older VM architectures that use a "thick", monolithic hypervisor. This permits a simpler hypervisor design, which enhances reliability and security. It also reduces single points of failure by assigning responsibilities to multiple system components, further improving reliability and security. Oracle VM Server for SPARC defines the following types of domain, each with its own role:
Control domain - the management control point for the server; it runs the logical domain daemon and constraints engine, and is used to configure domains and manage resources. The control domain is the first domain to boot on a power-up, is always an I/O domain, and is usually a service domain as well. It doesn't have to be, but there's no reason not to leverage it for virtual I/O services. There is one control domain per T-series system, and one per Physical Domain (PDom) on an M5-32 or M6-32 system. M5 and M6 systems can be physically domained, with logical domains within the physical ones.
I/O domain - a domain that has been assigned physical I/O devices. The devices may be: one or more PCIe root complexes (in which case the domain is also called a root complex domain), giving the domain native access to all the devices on the assigned PCIe buses, which can be any device type supported by Solaris on the hardware platform; an SR-IOV (Single-Root I/O Virtualization) function, where SR-IOV lets a physical device (also called a physical function, or PF) be subdivided into multiple virtual functions (VFs) that can be individually assigned directly to domains - SR-IOV devices currently can be Ethernet or InfiniBand devices; or direct I/O ownership of one or more PCI devices residing in a PCIe bus slot, giving the domain direct access to the individual devices. An I/O domain has native performance and functionality for the devices it owns, unmediated by any virtualization layer. It may also have virtual devices.
Service domain - a domain that provides virtual network and disk devices to guest domains. The services are defined by commands that are run in the control domain. It is usually an I/O domain as well, in order for it to have devices to virtualize and serve out.
Guest domain - a domain whose devices are all virtual rather than physical: virtual network and disk devices provided by one or more service domains. In common practice, this is where applications are run.

Device considerations
Consider the following when choosing between virtual devices and physical devices:
Virtual devices provide the best flexibility - they can be dynamically added to and removed from a running domain, and you can have a large number of them, up to a per-domain device limit.
Virtual devices are compatible with live migration - domains that exclusively have virtual devices can be live migrated between servers supporting domains.
On the other hand:
Physical devices provide the best performance - in fact, native "bare metal" performance. Virtual devices approach physical device throughput and latency, especially virtual network devices that can now saturate 10GbE links, but physical devices are still faster.
Physical I/O devices do not add load to service domains - all the I/O goes directly from the I/O domain to the device, while virtual I/O goes through service domains, which must be provided sufficient CPU and memory capacity.
Physical I/O devices can be other than network and disk - we virtualize network, disk, and serial console, but physical devices can be any of the wide range of attachable certified devices, including things like tape and CDROM/DVD devices.
In some cases the lines are now blurred: virtual devices perform better than before (starting with Oracle VM Server for SPARC 3.1 there is near-native virtual network performance), and there is more flexibility with physical devices than before (SR-IOV devices can now be dynamically reconfigured on domains). Tradeoffs one used to have to make are now relaxed: you can often have the flexibility of virtual I/O with performance that previously required physical I/O, and you can have the performance and isolation of SR-IOV with the ability to dynamically reconfigure it, just like with virtual devices.

Typical deployment
A service domain is generally also an I/O domain: otherwise it wouldn't have access to physical device "backends" to offer to its clients. Similarly, an I/O domain is also typically a service domain, in order to leverage the available PCI buses. Control domains must be I/O domains, because they boot up first on the server and require physical I/O. It's typical for the control domain to also be a service domain so it doesn't "waste" the I/O resources it uses. A simple configuration consists of a control domain that is also the one I/O and service domain, and some number of guest domains using virtual I/O. In production, customers typically use multiple domains with I/O and service roles to eliminate single points of failure, as described in Availability Best Practices - Avoiding Single Points of Failure. Guest domains have virtual disk and virtual network devices provisioned from more than one service domain, so failure of a service domain or I/O path or device does not result in an application outage. This also permits "rolling upgrades", in which service domains are upgraded one at a time while their guests continue to operate without disruption. (It should be noted that resiliency to I/O device failures can also be provided by a single control domain, using multi-path I/O.) In this type of deployment, control, I/O, and service domains are used for virtualization infrastructure, while applications run in guest domains.
Changing application deployment patterns
The above model has been widely and successfully used, but more configuration options are available now. Servers got bigger than the original T2000-class machines with 2 I/O buses, so there is more I/O capacity that can be used for applications. Increased server capacity made it attractive to run more vertically scaled applications, such as databases, with higher resource requirements than the "light" applications originally seen. This made it attractive to run applications in I/O domains so they could get bare-metal native I/O performance. This is leveraged by the Oracle SuperCluster engineered systems mentioned previously; in those engineered systems, I/O domains are used for high performance applications, with native I/O performance for disk and network and optimized access to the InfiniBand fabric. Another technical enhancement is Single Root I/O Virtualization (SR-IOV), which makes it possible to give domains direct connections and native I/O performance for selected I/O devices. Not all I/O domains own PCIe root complexes, and there are increasingly more I/O domains that are not service domains; they use their I/O connectivity for the performance of their own applications. However, there are some limitations and considerations: at this time, a domain using physical I/O cannot be live-migrated to another server. There is also a need to plan for security and to avoid introducing unneeded dependencies: if an I/O domain is also a service domain providing virtual I/O to guests, it has the ability to affect the correct operation of its client guest domains. This is even more relevant for the control domain, where the ldm command must be protected from unauthorized (or even mistaken) use that would affect other domains. As a general rule, running applications in the service domain or the control domain should be avoided. For reference, an excellent guide to secure deployment of domains by Stefan Hinker is at Secure Deployment of Oracle VM Server for SPARC.

To recap:
Guest domains with virtual I/O still provide the greatest operational flexibility, including features like live migration. They should be considered the default domain type to use unless there is a specific requirement that mandates an I/O domain.
I/O domains can be used for applications with the highest performance requirements. Single Root I/O Virtualization (SR-IOV) makes this more attractive by giving direct I/O access to more domains, and by permitting dynamic reconfiguration of SR-IOV devices. Today's larger systems provide multiple PCIe buses - for example, 16 buses on the T5-8 - making it possible to configure multiple I/O domains, each owning its own bus.
Service domains should in general not be used for applications, because compromised security in the domain, or an outage, can affect domains that depend on it. This concern can be mitigated by providing guests their virtual I/O from more than one service domain, so interruption of service in one service domain does not cause an application outage.
The control domain should in general not be used to run applications, for the same reason. Oracle SuperCluster uses the control domain for applications, but it is an exception: it's not a general purpose environment; it's an engineered system with specifically configured applications tuned for optimal performance.
These are recommended "best practices" based on conversations with a number of Oracle architects.
Keep in mind that "one size does not fit all", so you should evaluate these practices in the context of your own requirements.

Summary
Higher capacity servers that run Oracle VM Server for SPARC are attractive for applications with the most demanding resource requirements. New deployment models permit native I/O performance for demanding applications by running them in I/O domains with direct access to their devices. This is leveraged in SPARC SuperCluster, and can be leveraged in T-series servers to provision high-performance applications running in domains. Carefully planned, this can be used to provide peak performance for critical applications. That said, the improved virtual device performance in Oracle VM Server means that the default choice should still be guest domains with virtual I/O.
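For concreteness, here is a minimal sketch of that recommended default - a guest domain with purely virtual I/O - built with the ldm command from the control domain. All names (ldg1, primary-vsw0, primary-vds0, the ZFS volume backend) are hypothetical, and the sketch assumes the virtual switch and virtual disk service already exist:

ldm add-domain ldg1
ldm set-vcpu 16 ldg1
ldm set-memory 16G ldg1
ldm add-vnet vnet0 primary-vsw0 ldg1
ldm add-vdsdev /dev/zvol/dsk/ldoms/ldg1disk0 vol1@primary-vds0
ldm add-vdisk vdisk0 vol1@primary-vds0 ldg1
ldm bind ldg1
ldm start ldg1

Because every device here is virtual, the domain remains eligible for live migration; an I/O domain built with SR-IOV or a whole root complex would trade that flexibility for native performance.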

    Read the article

  • SQL SERVER – FIX: ERROR Msg 5169, Level 16: FILEGROWTH cannot be greater than MAXSIZE for file

    - by pinaldave
    I am writing this blog post right after resolving this error on one of the systems. Recently one of my friends, who is an expert in infrastructure as well as private cloud, was working on a SQL Server installation. Please note he is seriously expert at what he does, but he had never worked with SQL Server before and had absolutely no experience with its installation. He was modifying a database file and kept on getting the following error. As soon as he saw me, he asked me where the max file size setting is so he could change it. Let us quickly re-create the scenario he was facing.
Error Message:
Msg 5169, Level 16, State 1, Line 1
FILEGROWTH cannot be greater than MAXSIZE for file ‘NewDB’.
Creating the scenario:
CREATE DATABASE [NewDB] ON PRIMARY
(NAME = N'NewDB', FILENAME = N'D:\NewDB.mdf', SIZE = 4096KB, FILEGROWTH = 1024KB, MAXSIZE = 4096KB)
LOG ON
(NAME = N'NewDB_log', FILENAME = N'D:\NewDB_log.ldf', SIZE = 1024KB, FILEGROWTH = 10%)
GO
Now let us see the exact command that was producing the error for him:
USE [master]
GO
ALTER DATABASE [NewDB]
MODIFY FILE (NAME = N'NewDB', FILEGROWTH = 1024MB)
GO
Workaround / Fix / Solution:
The reason for the error is very simple: he was trying to set the filegrowth to a value much higher than the maximum file size specified for the database. There are two ways we can fix it.
Method 1: Reduce the filegrowth to a value lower than the maxsize of the file.
USE [master]
GO
ALTER DATABASE [NewDB]
MODIFY FILE (NAME = N'NewDB', FILEGROWTH = 1024KB)
GO
Method 2: Increase the maxsize of the file so it is greater than the new filegrowth.
USE [master]
GO
ALTER DATABASE [NewDB]
MODIFY FILE (NAME = N'NewDB', FILEGROWTH = 1024MB, MAXSIZE = 4096MB)
GO
I think this blog post will help everybody who is facing similar issues. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
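Before altering the file, it can also help to verify its current settings. A minimal sketch of such a check is below (sizes in sys.master_files are reported in 8 KB pages, and a max_size of -1 means unlimited growth):

-- Inspect current size, growth, and max size for the database's files
USE [master]
GO
SELECT name, size, growth, is_percent_growth, max_size
FROM sys.master_files
WHERE database_id = DB_ID('NewDB')
GO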

    Read the article

< Previous Page | 225 226 227 228 229 230 231 232 233 234 235 236  | Next Page >