Search Results

Search found 55437 results on 2218 pages for 'oracle berkeley db java edition'.


  • structured vs. unstructured data in db

    - by Igor
    The question is one of design. I'm gathering a big chunk of performance data with lots of key-value pairs: pretty much everything in /proc/cpuinfo, /proc/meminfo, /proc/loadavg, plus a bunch of other stuff, from several hundred hosts. Right now I just need to display the latest chunk of data in my UI. I will probably end up doing some analysis of the gathered data to track down performance problems later, but this is a new application, so I'm not sure what exactly I'm looking for performance-wise just yet.

    I could structure the data in the db -- have a column for each key I'm gathering. The table would end up being O(100) columns wide, it would be a pain to put into the db, and I would have to add new columns whenever I start gathering a new stat. But it would be easy to sort and analyze the data using plain SQL.

    Or I could just dump my unstructured data blob into the table: maybe three columns -- host id, timestamp, and a serialized version of my array, probably using JSON in a TEXT field.

    Which should I do? Am I going to be sorry if I go with the unstructured approach? When doing analysis, should I just convert the fields I'm interested in and create a new, more structured table? What are the trade-offs I'm missing here?
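    For reference, here is what the unstructured variant might look like in practice. This is a minimal sketch in Java/JDBC under assumed names (a host_stats table with host_id, collected_at, and data columns, and the org.json library for serialization); the question itself is language-agnostic, so treat it as an illustration of the three-column design rather than a prescribed implementation:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.Timestamp;
        import java.util.Map;

        import org.json.JSONObject; // assumes org.json is on the classpath

        public class SnapshotWriter {
            // Hypothetical schema for the unstructured approach:
            //   CREATE TABLE host_stats (host_id INT, collected_at TIMESTAMP, data TEXT)
            private static final String INSERT_SQL =
                "INSERT INTO host_stats (host_id, collected_at, data) VALUES (?, ?, ?)";

            public static void saveSnapshot(Connection conn, int hostId, Map<String, String> stats)
                    throws Exception {
                String json = new JSONObject(stats).toString(); // serialize all key-value pairs
                try (PreparedStatement ps = conn.prepareStatement(INSERT_SQL)) {
                    ps.setInt(1, hostId);
                    ps.setTimestamp(2, new Timestamp(System.currentTimeMillis()));
                    ps.setString(3, json);
                    ps.executeUpdate();
                }
            }
        }

    Promoting a frequently queried key to its own indexed column later is then an ALTER TABLE plus a backfill pass over the stored JSON blobs.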


  • Problem inserting android.text.format.Time.toMillis value into SQLite DB on droid

    - by schusselig
    I'm writing an app for Android OS, and I need to store some time values in the SQLite DB. I have been using android.text.format.Time to store the time values in the app, and then inserting the values as millis into the DB as REAL values. On the SDK emulator, everything works perfectly. On the sole phone I've had the opportunity to test my app on (so far), my duration code doesn't work as expected. Some relevant code:

        private static final String DATABASE_CREATE =
            "create table " + DATABASE_TABLE + " (" +
            KEY_ROWID + " integer primary key autoincrement, " +
            KEY_START + " REAL, " +
            KEY_STOP + " REAL, " +
            KEY_DUR + " REAL );";
        ...
        private SQLiteDatabase mDb;
        ContentValues timerValues = new ContentValues();
        ...
        timerValues.put(KEY_START, stime.toMillis(false));
        timerValues.put(KEY_STOP, etime.toMillis(false));
        timerValues.put(KEY_DURATION, stime.toMillis(false) - etime.toMillis(false));
        int result = mDb.insert(DATABASE_TABLE, null, timerValues);

    I pull this data from two separate functions with slightly different bits of code, both using Time.set(long millis), and both give incorrect results: the start and stop values come back correct, but the duration comes out 17 hours too large. Am I missing something about calculating durations, or does this just seem like there's something "special" about this particular droid? I'll have another droid to test on Monday, but any ideas are appreciated.
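    Independent of the handset, two things in the snippet stand out: the duration is computed as start minus stop (which goes negative), and the create statement uses KEY_DUR while the insert uses KEY_DURATION. A small sketch of the duration logic as presumably intended, reusing the question's variable and key names:

        // Duration should be stop minus start; toMillis(false) honors the Time
        // object's time zone/DST fields, so both values must share the same basis.
        long startMillis = stime.toMillis(false);
        long stopMillis  = etime.toMillis(false);
        long durationMillis = stopMillis - startMillis; // not start - stop

        timerValues.put(KEY_START, startMillis);
        timerValues.put(KEY_STOP, stopMillis);
        timerValues.put(KEY_DUR, durationMillis); // same column name as in DATABASE_CREATE
        long rowId = mDb.insert(DATABASE_TABLE, null, timerValues);

    If the device-specific offset persists after that, comparing the TimeZone on the emulator versus the phone would be the next thing to check.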


  • db:migrate creates sequences but doesn't alter table?

    - by RewbieNewbie
    Hello, I have a migration that creates a Postgres sequence for auto-incrementing a primary identifier, and then executes a statement to alter the column and specify the default value:

        execute 'CREATE SEQUENCE "ServiceAvailability_ID_seq";'
        execute <<-SQL
          ALTER TABLE "ServiceAvailability"
          ALTER COLUMN "ID" set DEFAULT NEXTVAL('ServiceAvailability_ID_seq');
        SQL

    If I run db:migrate everything seems to work, in that no errors are returned. However, if I run the rails application I get:

        null value in column "ID" violates not-null constraint

    By executing the sql statements from the migration manually, I have discovered that this error occurs because the alter statement isn't working, or isn't being executed. If I manually execute:

        CREATE SEQUENCE "ServiceAvailability_ID_seq";

    I get:

        ERROR: relation "serviceavailability_id_seq" already exists

    which means the migration successfully created the sequence! However, if I manually run:

        ALTER TABLE "ServiceAvailability" ALTER COLUMN "ID" set DEFAULT NEXTVAL('ServiceAvailability_ID_seq');

    it runs successfully and creates the default NEXTVAL. So the question is: why is the migration file creating the sequence with the first execute statement, but not altering the table in the second execute? (Remembering, no errors are output on running db:migrate.) Thank you, and apologies for the tl;dr.


  • Django: Save data from form in DB

    - by Anry
    I have a model:

        class Cost(models.Model):
            project = models.ForeignKey(Project)
            cost = models.FloatField()
            date = models.DateField()

    For the model I created a form class:

        class CostForm(ModelForm):
            class Meta:
                model = Cost
                fields = ['date', 'cost']

    views.py:

        def cost(request, offset):
            if request.method == 'POST':
                # HOW do I save the data in the DB?
                return HttpResponseRedirect('/')
            else:
                form = CostForm()

    In the template file I have:

        <form action="/cost/{{ project }}/" method="post" accept-charset="utf-8">
          <label for="date">Date:</label><input type="text" name="date" value={{ current_date }} id="date" />
          <label for="cost">Cost:</label><input type="text" name="cost" value="0" id="cost" />
          <p><input type="submit" value="Add"></p>
        </form>

    How do I save the data from the form in the DB? P.S. offset = project name. The Project model:

        class Project(models.Model):
            title = models.CharField(max_length=150)
            url = models.URLField()
            manager = models.ForeignKey(User)
            timestamp = models.DateTimeField()

    I tried to write:

        def cost(request, offset):
            if request.method == 'POST':
                form = CostForm(request.POST)
                if form.is_valid():
                    instance = form.save(commit=False)
                    instance.project = Project.objects.filter(title=offset)
                    instance.date = request.date
                    instance.cost = request.cost
                    instance.save()
                    return HttpResponseRedirect('/')
            else:
                form = CostForm()

    But it does not work :(


  • Django db encoding

    - by realshadow
    Hey, I have a little problem with encoding. The data in the db is OK, and when I select the data in PHP it's OK. The problem comes when I get the data and try to print it in the template: I get "Å port" instead of "Šport", etc. Everything is set to utf-8 -- in settings.py, the meta tags in the template, the db table -- and I even have a unicode method specified for the model, but nothing seems to work. I am getting pretty hopeless here... Here is some code:

        class Category_info(models.Model):
            objtree_label_id = models.AutoField(primary_key = True)
            node_id = models.IntegerField(unique = True)
            language_id = models.IntegerField()
            label = models.CharField(max_length = 255)
            type_id = models.IntegerField()

            class Meta:
                db_table = 'objtree_labels'

            def __unicode__(self):
                return self.label

    I have even tried return u"%s" % self.label. Here is the view:

        def categories_list(request):
            categories_list = Category.objects.filter(parent_id = 1, status = 1)
            paginator = Paginator(categories_list, 10)
            try:
                page = int(request.GET.get('page', 1))
            except ValueError:
                page = 1
            try:
                categories = paginator.page(page)
            except (EmptyPage, InvalidPage):
                categories = paginator.page(paginator.num_pages)
            return render_to_response('categories_list.html', {'categories': categories})

    Maybe I am just blind and/or stupid, but it just doesn't work. So any help is appreciated, thanks in advance. Regards


  • Handle multiple db updates from c# in SQL Server 2008

    - by joeriks
    I'd like to find a way to handle multiple updates to a SQL db with one single db roundtrip. I read about table-valued parameters in SQL Server 2008 (http://www.codeproject.com/KB/database/TableValueParameters.aspx), which seem really useful. But it seems I need to create both a stored procedure and a table type to use them. Is that true? Perhaps due to security? I would like to run a text query simply like this:

        var sql = "INSERT INTO Note (UserId, note) SELECT * FROM @myDataTable";
        var myDataTable = ... some System.Data.DataTable ...
        var cmd = new System.Data.SqlClient.SqlCommand(sql, conn);
        var param = cmd.Parameters.Add("@myDataTable", System.Data.SqlDbType.Structured);
        param.Value = myDataTable;
        cmd.ExecuteNonQuery();

    So: A) do I have to create both a stored procedure and a table type to use TVPs? And B) what alternative method is recommended to send multiple updates (and inserts) to SQL Server?
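    For what it's worth, a table type is required but a stored procedure is not: a TVP can be passed to an ad-hoc parameterized batch. Here is the same idea sketched from Java using the Microsoft JDBC driver's structured-parameter support (the dbo.NoteTableType type name and the Note columns are assumptions carried over from the question):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Types;

        import com.microsoft.sqlserver.jdbc.SQLServerDataTable;
        import com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement;

        public class TvpInsert {
            public static void main(String[] args) throws Exception {
                // One-time setup on the server (no stored procedure needed):
                //   CREATE TYPE dbo.NoteTableType AS TABLE (UserId INT, note NVARCHAR(MAX));
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:sqlserver://localhost;databaseName=Test;user=sa;password=secret")) {
                    SQLServerDataTable rows = new SQLServerDataTable();
                    rows.addColumnMetadata("UserId", Types.INTEGER);
                    rows.addColumnMetadata("note", Types.NVARCHAR);
                    rows.addRow(1, "first note");
                    rows.addRow(2, "second note");

                    // Ad-hoc SQL: the TVP is simply a table source in the batch.
                    String sql = "INSERT INTO Note (UserId, note) SELECT * FROM ?";
                    try (SQLServerPreparedStatement ps =
                             (SQLServerPreparedStatement) conn.prepareStatement(sql)) {
                        ps.setStructured(1, "dbo.NoteTableType", rows);
                        ps.executeUpdate();
                    }
                }
            }
        }

    The C# equivalent is the SqlDbType.Structured parameter the poster already has, plus setting TypeName = "dbo.NoteTableType" on the parameter, since ad-hoc batches need the type name spelled out.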


  • Web Shop Schema - Document Db

    - by Maxem
    I'd like to evaluate a document db, probably MongoDB, in an ASP.Net MVC web shop. A little reasoning at the beginning: there are about 2 million products, and the product model would be pretty bad for an RDBMS, as there'd be many different kinds of products with unique attributes. For example, there'd be books, which have isbn, authors, title, pages etc., as well as DVDs with play time, directors, artists etc., and quite a few more types. In the end, I'd have about 9 different products with a combined column count (counting common columns like title only once) of about 70 to 100, whereas each individual product has 15 columns at most. The three commonly used ways in an RDBMS would be:

    1. EAV model, which would have pretty bad performance characteristics and would make it either impractical or perform even worse if I'd like to display the author of a book in a list of different products (think start page, recommended products etc.).
    2. Ignore the column count and put it all in the product table: although I deal with somewhat bigger databases (row-wise), I don't have any experience with tables with more than 20 columns as far as performance is concerned, but I guess 100 columns would have some implications.
    3. Create a table for each product type: I personally don't like this approach, as it complicates everything else.

    C# driver / classes: I'd like to use the NoRM driver, and so far I think I'll try to create a product DTO that contains all properties (grouped within detail classes like book details, except for those properties that should be displayed on list views etc.). In the app I'll use BookBehavior / DvdBehaviour, which are wrappers around a product DTO but only expose the relevant properties.

    My questions now: Are my performance concerns with the many-columns approach valid? Did I overlook something, and is there a much better way to do it in an RDBMS? Is MongoDB on Windows stable enough? Does my approach with different behaviour wrappers make sense?
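    To make the document-model alternative concrete, here is a minimal sketch using the official MongoDB Java driver (the NoRM C# driver the poster mentions works on the same principle); collection and field names are illustrative. Each product type carries only its own attributes, so the 70-100 column union never materializes:

        import com.mongodb.client.MongoClient;
        import com.mongodb.client.MongoClients;
        import com.mongodb.client.MongoCollection;
        import org.bson.Document;
        import java.util.Arrays;

        public class ProductCatalog {
            public static void main(String[] args) {
                try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                    MongoCollection<Document> products =
                        client.getDatabase("shop").getCollection("products");

                    // Heterogeneous documents in one collection: only the fields
                    // that apply to a given product type are stored.
                    products.insertOne(new Document("type", "book")
                        .append("title", "Some Book")
                        .append("isbn", "978-0000000000")
                        .append("authors", Arrays.asList("A. Author"))
                        .append("pages", 320));
                    products.insertOne(new Document("type", "dvd")
                        .append("title", "Some Film")
                        .append("playTimeMinutes", 117)
                        .append("directors", Arrays.asList("D. Director")));

                    // Common fields (title, type) can still be queried and indexed uniformly.
                    for (Document d : products.find(new Document("type", "book"))) {
                        System.out.println(d.getString("title"));
                    }
                }
            }
        }

    The mixed list view (start page, recommendations) then becomes a single query over the shared fields, which is exactly the case that hurts in the EAV design.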


  • problem with insert into mysql DB using PHP

    - by user504363
    Hi all, I have a strange problem: I have a PHP page used to insert data into a MySQL DB. The problem is that when I execute the code, nothing is added to the db and no errors appear, although I set the error display settings:

        error_reporting(E_ALL);
        ini_set('display_errors', TRUE);
        ini_set('display_startup_errors', TRUE);

    Any idea about this problem? Here is the code I use for inserting:

        function GetSQLValueString($theValue, $theType, $theDefinedValue = "", $theNotDefinedValue = "")
        {
            if (PHP_VERSION < 6) {
                $theValue = get_magic_quotes_gpc() ? stripslashes($theValue) : $theValue;
            }
            $theValue = function_exists("mysql_real_escape_string")
                ? mysql_real_escape_string($theValue)
                : mysql_escape_string($theValue);
            switch ($theType) {
                case "text":
                    $theValue = ($theValue != "") ? "'" . $theValue . "'" : "NULL";
                    break;
                case "long":
                case "int":
                    $theValue = ($theValue != "") ? intval($theValue) : "NULL";
                    break;
                case "double":
                    $theValue = ($theValue != "") ? doubleval($theValue) : "NULL";
                    break;
                case "date":
                    $theValue = ($theValue != "") ? "'" . $theValue . "'" : "NULL";
                    break;
                case "defined":
                    $theValue = ($theValue != "") ? $theDefinedValue : $theNotDefinedValue;
                    break;
            }
            return $theValue;
        }

        include("Connections/mzk_mdc.php");
        $ext = 1;
        $website = "mzk";
        $mzk_sql = sprintf("INSERT INTO downloads (image, `by`, `rapid_title`, title, `description`, category, div_id, topic_url, down_times, ext, `website`) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)",
            GetSQLValueString($topic_thumb_image, "text"),
            GetSQLValueString($topic_by, "text"),
            GetSQLValueString($topic_des, "text"),
            GetSQLValueString($topic_title, "text"),
            GetSQLValueString($forum_content, "text"),
            GetSQLValueString($topic_category, "text"),
            GetSQLValueString($topic_div, "text"),
            GetSQLValueString($forum_link, "text"),
            GetSQLValueString($topic_down_times, "int"),
            GetSQLValueString($ext, "int"),
            GetSQLValueString($website, "text"));
        mysql_select_db($database_mdc, $mdc);
        $mzk_result = mysql_query($mzk_sql, $mdc) or die("can not do more");
        mysql_close($mdc);
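    One observation: `or die("can not do more")` swallows the actual MySQL error (mysql_error() would reveal it), and the sprintf/escape style makes silent column/value mismatches easy. For contrast, here is the same insert as a parameterized statement, sketched in Java/JDBC since the pattern rather than the language is the point; table and column names are taken from the question, the values are placeholders:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        public class DownloadInsert {
            public static void main(String[] args) throws Exception {
                String sql = "INSERT INTO downloads "
                    + "(image, `by`, rapid_title, title, `description`, category, "
                    + "div_id, topic_url, down_times, ext, website) "
                    + "VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)";
                try (Connection conn = DriverManager.getConnection(
                         "jdbc:mysql://localhost/mdc", "user", "password");
                     PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setString(1, "thumb.png");                // image
                    ps.setString(2, "someUser");                 // `by`
                    ps.setString(3, "rapid title");              // rapid_title
                    ps.setString(4, "topic title");              // title
                    ps.setString(5, "description text");         // description
                    ps.setString(6, "category");                 // category
                    ps.setString(7, "div1");                     // div_id
                    ps.setString(8, "http://example.com/topic"); // topic_url
                    ps.setInt(9, 0);                             // down_times
                    ps.setInt(10, 1);                            // ext
                    ps.setString(11, "mzk");                     // website
                    ps.executeUpdate(); // driver handles quoting and escaping
                }
            }
        }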


  • Pre-populate iPhone Safari SQLite DB

    - by Matt Rogish
    I'm working with a PhoneGap app that uses Safari local storage (a SQLite DB) via Javascript: http://developer.apple.com/safari/library/documentation/iPhone/Conceptual/SafariJSDatabaseGuide/UsingtheJavascriptDatabase/UsingtheJavascriptDatabase.html

    On first load, the app creates the database and tables, and populates the data via a series of INSERT statements. If the user closes the app while this processing is happening, my app database is left in an inconsistent state. What I'd prefer to do is deploy the SQLite DB as part of my iTunes App packaging, so nothing must be populated at app cold start. However, I'm not sure that is possible -- all of the Google hits on this topic that I can find refer to the Core Data-provided SQLite, which is not what we're using...

    If it's not possible, could I wrap the entire thing in a transaction and keep re-trying it when the app is restarted? Failing that, I guess I can create a simple table with one boolean column "is_app_db_loaded?" and set it to true after I've processed all my inserts. But that's really gross... Ideas? Thanks!!


  • SqlCE DB occasionally freezes on one handheld, not another

    - by Michael
    I have two types of custom handhelds which are similar but slightly different, each running the same WinForm application and a WinCE database:

        Type 1: WinCE 4.2, 400 MHz, 93244 KB
        Type 2: WinCE 5.0, 520 MHz, 84208 KB

    Type 1 will happily proceed through a large batch db operation (initiated by the app), but Type 2 will consistently begin c-r-a-w-l-i-n-g (for several to many cycles) at around the 200-cycle mark. At several points it will begin running normally and then crawl again. The app does several db ops (inserts, updates and selects, no deletes). To simplify my situation, I've built a small test app which essentially does this:

        command_s.CommandText = "select dvr from vr where vid = 2211250";
        command_u.CommandText = "update pvr set LocationID=81 where Status='OK' and vri = 27861";
        while (going)
        {
            command_s.ExecuteScalar();
            command_u.ExecuteNonQuery();
        }

    and set it off running on the two units side by side. Sure enough, the slower (400 MHz) unit is outpacing the faster (520 MHz) unit (it's about 5000 cycles ahead right now), and I can see noticeable pauses on the 520 MHz unit. What is causing this?


  • How do people handle foreign keys on clients when synchronizing to master db

    - by excsm
    Hi, I'm writing an application with offline support, i.e. browser/mobile clients sync commands to the master db every so often. I'm using uuids on both the client and server side. When syncing up to the server, the server will return a map of local uuids (luids) to server uuids (suids). Upon receiving this map, clients update their records' suid attributes with the appropriate values.

    However, say a client record, e.g. a todo, has an attribute 'list_id' which holds the foreign key to the todo's list record. I use luids in foreign keys on clients. However, when that attribute is sent over to the server, it would dirty the server db with luids rather than the suids the server is using.

    My current solution is for the master server to keep a record of the mappings of luids to suids (per client id) and, for each foreign key in a command, look up the suid for that particular client and use the suid instead. I'm wondering whether others have come across this problem and, if so, how they have solved it. Is there a more efficient, simpler way? I took a look at the question "Synchronizing one or more databases with a master database - Foreign keys (5)", where someone seemed to suggest my current solution as one option, composite keys using suids and autoincrementing sequences as another, and using negative ids for client ids and then updating all negative ids with the suids as a third. Both of these other options seem like a lot more work. Thanks, Saimon
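    The server-side remap the poster describes is essentially one dictionary lookup per foreign key before a command is applied. A minimal sketch in plain Java (class and field names are invented for illustration):

        import java.util.HashMap;
        import java.util.Map;

        public class UuidRemapper {
            // Per-client mapping of local uuids (luids) to server uuids (suids),
            // persisted server-side and extended on every sync.
            private final Map<String, Map<String, String>> luidToSuidByClient = new HashMap<>();

            public void record(String clientId, String luid, String suid) {
                luidToSuidByClient
                    .computeIfAbsent(clientId, k -> new HashMap<>())
                    .put(luid, suid);
            }

            /** Rewrite a foreign key from the client's luid space into suids. */
            public String remap(String clientId, String foreignKey) {
                Map<String, String> m = luidToSuidByClient.getOrDefault(clientId, Map.of());
                // If the key is unknown, it may reference a record created in the
                // same batch; the caller should assign that record's suid first, then retry.
                return m.getOrDefault(foreignKey, foreignKey);
            }
        }

    Applied to the example in the question, an incoming todo's list_id would pass through remap(clientId, listId) before the insert on the server, which keeps luids out of the master db without changing the client schema.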


  • trouble connecting to MySql DB (PHP)

    - by user332817
    Hi, I have the following PHP code to connect to my db:

        <?php
        ob_start();
        $host = "localhost";   // Host name
        $username = "root";    // Mysql username
        $password = "";        // Mysql password
        $db_name = "test";     // Database name
        $tbl_name = "members"; // Table name

        // Connect to server and select database.
        mysql_connect("$host", "$username", "$password") or die("cannot connect");
        ?>

    However, I get the following error:

        Warning: mysql_connect() [function.mysql-connect]: [2002] A connection attempt failed because the connected party did not (trying to connect via tcp://localhost:3306) in C:\Program Files (x86)\EasyPHP-5.3.2i\www\checklogin.php on line 11
        Warning: mysql_connect() [function.mysql-connect]: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. in C:\Program Files (x86)\EasyPHP-5.3.2i\www\checklogin.php on line 11
        Fatal error: Maximum execution time of 30 seconds exceeded in C:\Program Files (x86)\EasyPHP-5.3.2i\www\checklogin.php on line 11

    I am able to add a db/tables via phpMyAdmin, but I can't connect using PHP. Here is a screenshot of my phpMyAdmin page: http://img294.imageshack.us/img294/1589/sqls.jpg

    Any help would be appreciated; thanks in advance.


  • mailing system DB structure, need help

    - by Anna
    I have a system where a user (sender) can write a note to friends (receivers), with any number of receivers >= 0. The text of the message is saved in the DB and visible to the sender and all receivers when they log in to the system. The sender can add more receivers at any time. Moreover, any of the receivers can edit the message and even remove it from the DB. For this system I created 3 tables; briefly:

        users(userID, username, password)
        messages(messageID, text)
        list(id, senderID, receiverID, messageID)

    In table "list" each row corresponds to a sender-receiver pair, like:

        sender_x_ID -- receiver_1_ID -- message_1_ID
        sender_x_ID -- receiver_2_ID -- message_1_ID
        sender_x_ID -- receiver_3_ID -- message_1_ID

    Now the problems are:

    1. If a user deletes the message from table "messages", how do I automatically delete all rows from table "list" which correspond to the deleted message? Do I have to include some foreign keys?

    2. More important: if the sender has, say, 3 receivers for his message1 (username1, username2 and username3) and at a certain moment decides to add username4 and username5 and at the same time exclude username1 from the list of receivers, the PHP code will get the new list of receivers (username2, username3, username4, username5). That means inserting into table "list":

        sender_x_ID -- receiver_4_ID -- message_1_ID
        sender_x_ID -- receiver_5_ID -- message_1_ID

    and also deleting from table "list" the row corresponding to user1 (who is not in the list of receivers any more):

        sender_x_ID -- receiver_1_ID -- message_1_ID

    Which sql queries should I send from PHP to do this in an easy and intelligent way? Please help! Examples of sql queries would be perfect!
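    Both halves have standard answers: question 1 is an ON DELETE CASCADE foreign key, and question 2 is simplest as a delete-then-reinsert of the receiver list inside a transaction. A sketch in Java/JDBC against the poster's schema (the SQL strings are the part that matters; InnoDB tables are assumed, since MyISAM ignores foreign keys):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.Statement;
        import java.util.List;

        public class MessageReceivers {
            public static void main(String[] args) throws Exception {
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:mysql://localhost/notes", "user", "password")) {
                    // Q1: cascade deletes from messages into list (run once).
                    try (Statement st = conn.createStatement()) {
                        st.execute("ALTER TABLE list ADD CONSTRAINT fk_list_message "
                            + "FOREIGN KEY (messageID) REFERENCES messages(messageID) "
                            + "ON DELETE CASCADE");
                    }

                    // Q2: replace the receiver set for one sender/message pair.
                    long senderId = 1, messageId = 1;
                    List<Long> receivers = List.of(2L, 3L, 4L, 5L);
                    conn.setAutoCommit(false);
                    try (PreparedStatement del = conn.prepareStatement(
                             "DELETE FROM list WHERE senderID = ? AND messageID = ?");
                         PreparedStatement ins = conn.prepareStatement(
                             "INSERT INTO list (senderID, receiverID, messageID) VALUES (?, ?, ?)")) {
                        del.setLong(1, senderId);
                        del.setLong(2, messageId);
                        del.executeUpdate();          // drop the old receiver list wholesale...
                        for (long r : receivers) {    // ...and re-insert the new one
                            ins.setLong(1, senderId);
                            ins.setLong(2, r);
                            ins.setLong(3, messageId);
                            ins.addBatch();
                        }
                        ins.executeBatch();
                        conn.commit();
                    } catch (Exception e) {
                        conn.rollback();
                        throw e;
                    }
                }
            }
        }

    Deleting and re-inserting the whole list avoids computing the add/remove diff in PHP; with a handful of receivers per message the cost is negligible.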


  • rake db:migrate fails when trying to do inserts

    - by anthony
    I'm trying to get a database populated so I can begin working on a project. This project is already built, and I'm being brought in to help with front-end work. The problem is I can't get rake db:migrate to do any inserts. Every time I run rake db:migrate I get this:

        ...
        == 20081220084043 CreateTimeDimension: migrating ==============================
        -- create_table(:time_dimension)
           - 0.0870s
        INSERT time_dimension(time_key, year, month, day, day_of_week, weekend, quarter) VALUES(20080101, 2008, 1, 1, 'Tuesday', false, 1)
        rake aborted!
        Could not load driver (uninitialized constant Mysql::Driver)
        ...

    I'm building on a MBP with Snow Leopard. I've installed XCode from the disk that comes with the mac. I've updated Ruby, installed Rails and all the needed gems. I have the 64-bit version of MySQL installed. I've tried the 32-bit version of MySQL, and I've even tried installing from MacPorts (via http://www.robbyonrails.com/articles/2010/02/08/installing-ruby-on-rails-passenger-postgresql-mysql-oh-my-zsh-on-snow-leopard-fourth-edition). The mysql gem is installed using:

        sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/path/to/mysql/bin/mysql_config

    The migrate creates the tables just fine, but it dies every. single. time. it tries an insert. Any help would be great.


  • Parsing XHTML results from Bing

    - by Nir
    Hello, I am trying to parse search results received from the Bing search engine, which come back as XHTML, in Java. I am using the SAX XMLReader to read the results, but I keep getting errors. Here is my code -- this first class is the handler for the reader:

        import org.xml.sax.Attributes;
        import org.xml.sax.SAXException;
        import org.xml.sax.helpers.DefaultHandler;

        public class XHTMLHandler extends DefaultHandler {

            public XHTMLHandler() {
                super();
            }

            public void startDocument() {
                System.out.println("Start document");
            }

            public void endDocument() {
                System.out.println("End document");
            }

            public void startElement(String uri, String name, String qName, Attributes atts) {
                if ("".equals(uri))
                    System.out.println("Start element: " + qName);
                else
                    System.out.println("Start element: {" + uri + "}" + name);
            }

            public void endElement(String uri, String name, String qName) {
                if ("".equals(uri))
                    System.out.println("End element: " + qName);
                else
                    System.out.println("End element: {" + uri + "}" + name);
            }

            public void startPrefixMapping(String prefix, String uri) throws SAXException {
            }

            public void endPrefixMapping(String prefix) throws SAXException {
            }

            public void characters(char ch[], int start, int length) {
                System.out.print("Characters: \"");
                for (int i = start; i < start + length; i++) {
                    switch (ch[i]) {
                        case '\\': System.out.print("\\\\"); break;
                        case '"':  System.out.print("\\\""); break;
                        case '\n': System.out.print("\\n");  break;
                        case '\r': System.out.print("\\r");  break;
                        case '\t': System.out.print("\\t");  break;
                        default:   System.out.print(ch[i]);  break;
                    }
                }
                System.out.print("\"\n");
            }
        }

    And this is the program itself:

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.io.OutputStreamWriter;
        import java.net.HttpRetryException;
        import java.net.HttpURLConnection;
        import java.net.URL;

        import org.xml.sax.InputSource;
        import org.xml.sax.SAXException;
        import org.xml.sax.XMLReader;
        import org.xml.sax.helpers.XMLReaderFactory;

        public class Searching {
            private String m_urlBingSearch = "http://www.bing.com/search?q=";
            private HttpURLConnection m_httpCon;
            private OutputStreamWriter m_streamWriter;
            //private BufferedReader m_bufferReader;
            private URL m_serverAdress;
            private StringBuilder sb;
            private String m_line;
            private InputSource m_inputSrc;

            public Searching() {
                m_httpCon = null;
                m_streamWriter = null;
                //m_bufferReader = null;
                m_serverAdress = null;
                sb = null;
                m_line = new String();
            }

            public void SearchBing(String searchPrms) throws SAXException, IOException {
                // set up connection
                sb = new StringBuilder();
                sb.append(m_urlBingSearch);
                sb.append(searchPrms);
                m_serverAdress = new URL(sb.toString());
                m_httpCon = (HttpURLConnection) m_serverAdress.openConnection();
                m_httpCon.setRequestMethod("GET");
                m_httpCon.setDoOutput(true);
                m_httpCon.setConnectTimeout(10000);
                m_httpCon.connect();
                //m_streamWriter = new OutputStreamWriter(m_httpCon.getOutputStream());
                //m_bufferReader = new BufferedReader(new InputStreamReader(m_httpCon.getInputStream()));

                XMLReader reader = XMLReaderFactory.createXMLReader();
                XHTMLHandler handle = new XHTMLHandler();
                reader.setContentHandler(handle);
                reader.setErrorHandler(handle);
                //reader.startPrefixMapping("html", "http://www.w3.org/1999/xhtml");
                handle.startPrefixMapping("html", "http://www.w3.org/1999/xhtml");
                m_inputSrc = new InputSource(m_httpCon.getInputStream());
                reader.parse(m_inputSrc);
                m_httpCon.disconnect();
            }

            public static void main(String[] args) throws SAXException, IOException {
                Searching s = new Searching();
                s.SearchBing("beatles");
            }
        }

    This is my error message:

        Exception in thread "main" java.io.IOException: Server returned HTTP response code: 503 for URL: http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd
            at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLEntityManager.setupCurrentEntity(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLEntityManager.startEntity(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLEntityManager.startDTDEntity(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLDTDScannerImpl.setInputSource(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$DTDDriver.dispatch(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$DTDDriver.next(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$PrologDriver.next(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source)
            at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source)
            at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source)
            at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(Unknown Source)
            at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(Unknown Source)
            at Searching.SearchBing(Searching.java:57)
            at Searching.main(Searching.java:65)

    Can someone please help? I think it has something to do with the DTD, but I don't know how to fix it.
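    The guess at the end is right: the parser is trying to download the XHTML DTD from w3.org, which throttles those requests (hence the 503). A sketch of the usual fix, assuming the default Xerces parser: either tell it not to load the external DTD, or resolve every external entity to an empty stream (each option alone is sufficient):

        import java.io.StringReader;

        import org.xml.sax.InputSource;
        import org.xml.sax.XMLReader;
        import org.xml.sax.helpers.XMLReaderFactory;

        public class NoDtdReader {
            public static XMLReader create() throws Exception {
                XMLReader reader = XMLReaderFactory.createXMLReader();
                // Option 1: Xerces-specific feature -- skip fetching the external DTD.
                reader.setFeature(
                    "http://apache.org/xml/features/nonvalidating/load-external-dtd", false);
                // Option 2 (works with any SAX parser): resolve external entities,
                // including the DTD reference, to an empty document.
                reader.setEntityResolver(
                    (publicId, systemId) -> new InputSource(new StringReader("")));
                return reader;
            }
        }

    Note too that live Bing markup is not guaranteed to be well-formed XHTML, so an HTML-tolerant parser may be needed regardless of the DTD issue.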


  • Need help with transferring data between MySQL db's using PHP

    - by JM4
    In one of the sites I manage, the client has decided to take on ACH/bank account administration where it was previously outsourced. As a result, the information submitted in our online form, which used to simply be stored in a single database for processing, must now sit in 'limbo' until the funds used for payment have been verified. My original plan is as follows:

    1. At the end of an enrollment, all form data is collected and stored in a single MySQL database.
    2. Our internal administrator receives an email notification reminding him that enrollments have taken place.
    3. He processes the ACH information collected and waits the 3-4 business days needed for payment to clear.
    4. Once the payment information has been returned as good (I haven't considered what I will do with the 'bad' yet), the administrator can log into a secure portal which allows him to click a button to 'process' the full information once compared and verified.

    The process is simplified as:

    1. Enrollment complete: data stored in DB 'A'.
    2. Funds verified and link clicked: data from 'A' is copied to DB 'B' and 'A' is deleted.

    I have run similar processes with CSV output before and simply used:

        //transfers old data to archive
        $transfer = mysql_query('INSERT INTO '.$archive.' SELECT * FROM '.$table) or die(mysql_error());
        //empties existing table
        $query = mysql_query('TRUNCATE TABLE '.$table) or die(mysql_error());

    but in those cases ALL data returned was copied and deleted. I only want to copy and delete a single record. Any idea how to accomplish this?
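    The single-record version is the same pair of statements with a WHERE clause on the row's primary key, ideally inside a transaction so the copy and the delete succeed or fail together. A sketch in Java/JDBC (enrollments/enrollments_archive and the id column are assumed names; in the poster's PHP the change amounts to appending the WHERE clause to both queries):

        import java.sql.Connection;
        import java.sql.PreparedStatement;

        public class MoveRecord {
            public static void move(Connection conn, long id) throws Exception {
                conn.setAutoCommit(false);
                try (PreparedStatement copy = conn.prepareStatement(
                         "INSERT INTO enrollments_archive SELECT * FROM enrollments WHERE id = ?");
                     PreparedStatement del = conn.prepareStatement(
                         "DELETE FROM enrollments WHERE id = ?")) {
                    copy.setLong(1, id);
                    copy.executeUpdate();
                    del.setLong(1, id);
                    del.executeUpdate();
                    conn.commit();   // both happened
                } catch (Exception e) {
                    conn.rollback(); // neither happened
                    throw e;
                }
            }
        }

    Note that the transactional guarantee requires both tables to be InnoDB; with MyISAM the two statements would still work, just without atomicity.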


  • Add to existing db values, rather than overwrite - PDO

    - by sam
    I'm trying to add to an existing decimal value in a table, for which I'm using the SQL below:

        UPDATE Funds SET Funds = Funds + :funds WHERE id = :id

    I'm using a PDO class to handle my db calls, with the method below being used to update the db, but I couldn't figure out how to amend it to output the above query. Any ideas?

        public function add_to_values($table, $info, $where, $bind = "") {
            $fields = $this->filter($table, $info);
            $fieldSize = sizeof($fields);
            $sql = "UPDATE " . $table . " SET ";
            for ($f = 0; $f < $fieldSize; ++$f) {
                if ($f > 0)
                    $sql .= ", ";
                $sql .= $fields[$f] . " = :update_" . $fields[$f];
            }
            $sql .= " WHERE " . $where . ";";
            $bind = $this->cleanup($bind);
            foreach ($fields as $field)
                $bind[":update_$field"] = $info[$field];
            return $this->run($sql, $bind);
        }
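    The only change the generated query needs is emitting `field = field + :placeholder` instead of `field = :placeholder` in the SET loop. Since the poster's helper is PHP, here is the pattern reduced to its core in Java/JDBC purely for illustration (table and column names come from the question; the loop-built SET clause stands in for the PDO method's):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.util.List;
        import java.util.StringJoiner;

        public class AdditiveUpdate {
            /** Builds e.g. "UPDATE Funds SET Funds = Funds + ? WHERE id = ?". */
            public static int addToValues(Connection conn, String table,
                                          List<String> fields, List<Double> deltas,
                                          long id) throws Exception {
                StringJoiner set = new StringJoiner(", ");
                for (String f : fields) {
                    set.add(f + " = " + f + " + ?"); // additive, not overwriting
                }
                String sql = "UPDATE " + table + " SET " + set + " WHERE id = ?";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    int i = 1;
                    for (double d : deltas) ps.setDouble(i++, d);
                    ps.setLong(i, id);
                    return ps.executeUpdate();
                }
            }
        }

    As in the PDO version, field names concatenated into the SQL must be whitelisted (which is presumably what the poster's filter() method does).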


  • Location of DB models in Zend Framework - want them centralized

    - by jeffkolez
    Maybe I've been staring at the problem too long and it's much simpler than I think, but I'm stuck right now. I have three websites that are going to share database models. I've structured my applications so that I have an application directory for each site and a public directory for each site. The DB models live in a directory in the library, along with Zend Framework and my third-party libraries. I use the Autoloader class, and when I try to instantiate one of my DB classes, it fails. The library directory is in my include path, but for whatever reason it refuses to instantiate my classes. It will work if I have my models in my application directory, but that's not the point -- they're supposed to be shared classes in a library.

        $model = new Model_Login();
        $model->hello_world();

    This fails when it's in the library. The class is just a test:

        class Model_Login {
            public function hello_world() {
                echo "hello world";
            }
        }

    Everything works until I try to instantiate one of my models. I've even tried renaming the class to something else (Db_Login), but that doesn't work either. Any ideas? Thanks in advance.


  • Cannot translate date formats from rails form to mysql db

    - by Steve
    I have a simple search form in Rails 3 that has two date fields. I'm having a problem getting these dates into my MySQL db. I've tried using the american_date gem, specifying date formats in my initializers, in the config/locales/en.yml file, and directly on the date fields themselves. Currently, I'm setting the Rails-approved date format in the view:

        <%= f.text_field :depart_date, :value => Date.today.strftime('%Y-%m-%d') %>

    The date format in my DB is also YYYY-mm-dd, so things should be going smoothly. The console tells me that the two date fields are both of class Date. I think I've found the disconnect. From the logs:

        Started POST "/searches" for 127.0.0.1 at 2013-10-30 17:43:26 -0400
        Processing by SearchesController#create as HTML
        Parameters: {"utf8"=>"v","search"=>{"depart_date"=>"2013-10-30", "return_date"=>"2013-11-09"}
        BEGIN
        SQL (0.0ms) INSERT INTO `searches` (`depart_date`,`return_date`) VALUES ('2013-30-10','2013-09-11')
        COMMIT

    Note that the month and day values are switched in the INSERT statement. How can I prevent this from happening?


  • Getting selected row in inputListOfValues returnPopupListener

    - by Frank Nimphius
    Model-driven lists-of-values in Oracle ADF are configured on the ADF Business Components attribute that should be updated with the user's value selection. The value lookup can be configured to be displayed as a select list, combo box, input list of values, or combo box with list of values. Displaying the list in an af:inputListOfValues component shows the attribute value in an input text field, with an icon attached for the user to launch the list-of-values dialog. The list-of-values dialog allows users to use a search form to filter the lookup data list and to select an entry, whose return value is then set as the value of the af:inputListOfValues component.

    Note: The model-driven LOV can be configured in ADF Business Components to update multiple attributes with the user selection, though the most common use case is to update the value of a single attribute.

    A question on OTN was how to access the row of the selected return value on the ADF Faces front end. For this, you need to know that there is a Model property defined on the af:inputListOfValues that references the ListOfValuesModel implementation in the model. It is the value of this Model property that you need to get access to.

    The af:inputListOfValues has a ReturnPopupListener property that you can use to configure a managed bean method to receive notification when the user closes the LOV popup dialog by selecting the Ok button. This listener is not triggered when the Cancel button is pressed. The managed bean signature can be created declaratively in Oracle JDeveloper 11g using the Edit option in the context menu next to the ReturnPopupListener field in the Property Inspector. The empty method signature looks as shown below:

        public void returnListener(ReturnPopupEvent returnPopupEvent) {
        }

    The ReturnPopupEvent object gives you access to the RichInputListOfValues component instance, which represents the af:inputListOfValues component at runtime. From here you access the Model property of the component to then get a handle to the CollectionModel. The CollectionModel returns an instance of JUCtrlHierBinding in its getWrappedData method. Though there is no tree binding definition for the list of values dialog defined in the PageDef, it exists. Once you have access to this, you can read the row the user selected in the list of values dialog.
    See the following code:

        public void returnListener(ReturnPopupEvent returnPopupEvent) {
            // access the UI component instance from the return event
            RichInputListOfValues lovField =
                (RichInputListOfValues) returnPopupEvent.getSource();

            // the LOV model gives us access to the collection model and the
            // ADF tree binding used to populate the lookup table
            ListOfValuesModel lovModel = lovField.getModel();
            CollectionModel collectionModel =
                lovModel.getTableModel().getCollectionModel();

            // the collection model wraps an instance of the ADF
            // FacesCtrlHierBinding, which is cast to JUCtrlHierBinding
            JUCtrlHierBinding treeBinding =
                (JUCtrlHierBinding) collectionModel.getWrappedData();

            // the selected rows are defined in a RowKeySet; as the LOV table only
            // supports single selection, there is only one entry in the set
            RowKeySet rks = (RowKeySet) returnPopupEvent.getReturnValue();

            // the ADF Faces table row key is a list containing the oracle.jbo.Key
            List tableRowKey = (List) rks.iterator().next();

            // get the iterator binding for the LOV lookup table binding
            DCIteratorBinding dciter = treeBinding.getDCIteratorBinding();

            // get the selected row by its JBO key
            Key key = (Key) tableRowKey.get(0);
            Row rw = dciter.findRowByKeyString(key.toStringFormat(true));

            // work with the row
            // ...
        }


  • How to Use RDA to Generate WLS Thread Dumps At Specified Intervals?

    - by Daniel Mortimer
    Introduction

    There are many ways to generate a thread dump of a WebLogic managed server. For example, take a look at "Taking Thread Dumps" (an excellent blog post on the Middleware Magic site) or "Different ways to take thread dumps in WebLogic Server" (Document 1098691.1). There is another method: use Remote Diagnostic Agent! The solution described below is not documented, but it is relatively straightforward to execute. One advantage of using RDA to collect the thread dumps is that RDA will also collect configuration, log files, network, system, and performance information at the same time.

    Instructions

    1. Not familiar with Remote Diagnostic Agent? Take a look at my previous blog "Resolve SRs Faster Using RDA - Find the Right Profile".

    2. Choose a profile which includes the WebLogic Server data collection modules (for example the profile "WebLogicServer"). At RDA setup time you should see the prompt below:

        -------------------------------------------------------------------------------
        S301WLS: Collects Oracle WebLogic Server Information
        -------------------------------------------------------------------------------
        Enter the location of the directory where the domains to analyze are located
        (For example in UNIX, <BEA Home>/user_projects/domains or <Middleware Home>/user_projects/domains)
        Hit 'Return' to accept the default (/oracle/11AS/Middleware/user_projects/domains)
        >

        For a successful WLS connection, ensure that the domain Admin Server is up and running.

        Data Collection Type:
          1  Collect for a single server (offline mode)
          2  Collect for a single server (using WLS connection)
          3  Collect for multiple servers (using WLS connection)
        Enter the item number
        Hit 'Return' to accept the default (1)
        > 2

    Choose option 2 or 3. Note: collecting for a single server or multiple servers using a WLS connection means that RDA will attempt to connect and execute online WLST commands against the targeted server(s). The thread dumps are collected using the WLST function "threadDumps()". If WLST cannot connect to the managed server, RDA will proceed to collect other data and ignore the request to collect thread dumps. If in the final output you see no Thread Dump menu item, then it's likely that the managed server is in a state which prevents new connections to it. If faced with this scenario, you would have to employ alternative methods for collecting thread dumps.

    3. The RDA setup will create a setup.cfg file in the RDA_HOME directory. Open this file in an editor. You will find the following parameters, which govern the number of thread dumps and the thread dump interval:

        #N.Number of thread dumps to capture
        WREQ_THREAD_DUMP=10
        #N.Thread dump interval
        WREQ_THREAD_DUMP_INTERVAL=5000

    The lines above show the default settings; in other words, RDA will collect 10 thread dumps at 5000 millisecond (5 second) intervals. You may want to change this to something like:

        #N.Number of thread dumps to capture
        WREQ_THREAD_DUMP=10
        #N.Thread dump interval
        WREQ_THREAD_DUMP_INTERVAL=30000

    However, bear in mind that such a change will increase the total amount of time it takes for RDA to complete its run.

    4. Once you are happy with the setup.cfg, run RDA. RDA will collect, render, generate and package all files in the output directory.

    5. For ease of viewing, open up the RDA start html file, "xxxx__start.htm". The thread dumps can be found under the WLST Collections for the target managed server(s).
    See the screenshots below:

        Screenshot 1: RDA Start Page - Main Index
        Screenshot 2: Managed Server Sub Index
        Screenshot 3: WLST Collections
        Screenshot 4: Thread Dump Page - List of dump file links
        Screenshot 5: Thread Dump Dat File Link

    Additional Comments:

    A) You can view the thread dump files within the RDA Start Page framework, but most likely you will want to download the dat files for in-depth analysis via thread dump analysis tools such as:

    - Thread Dump Analyzer
    - Samurai - a GUI-based tail / thread dump analysis tool

    If you are new to thread dump analysis, take a look at this recorded Support Advisor webcast: "Oracle WebLogic Server: Diagnosing Performance Issues through Java Thread Dumps" [slide deck from the webcast in PDF format].

    B) I have logged a couple of enhancement requests for the RDA development team to consider:

    - Add a timestamp to the dump file links, the dat filename, and the top of the body of the dat file.
    - Package the individual thread dumps in a zip so all dump files can be conveniently downloaded in one go.


  • Architect Day: Boston - Agenda Update

    - by Bob Rhubart
    Here's the latest information on the session schedule and content for Oracle Technology Network Architect Day in Boston, MA on September 12, 2012. Registration is open, but seating is limited.

    When: September 12, 2012, 8:30am - 5:00pm
    Where: Boston Marriott Burlington, One Burlington Mall Road, Burlington, MA 01803

    Register now

    Agenda:

    8:30 am - 9:00 am | Registration and Continental Breakfast | Salon E Foyer

    9:00 am - 9:15 am | Welcome and Opening Comments | Bob Rhubart | Salon E

    9:15 am - 10:00 am | Engineered Systems: Oracle's Vision for the Future | Ralf Dossmann | Salon E
    Oracle's Exadata and Exalogic are impressive products in their own right. But working in combination they deliver unparalleled transaction processing performance, with up to a 30x increase over existing legacy systems, at the lowest cost of ownership on a 3 or 5 year basis of any hardware. In this session you'll learn how to leverage Oracle's engineered systems within your enterprise to deliver record-breaking performance at the lowest TCO.

    10:00 am - 10:30 am | Securing Public and Private Clouds | Anton Nielsen | Salon E
    Long before the term "Cloud Computing" existed, Oracle technologies supported and promoted the concept. Centralized data with remote users has been at the core of these technologies for decades. The public cloud, and extending private clouds to the internet, though, has added security challenges never imagined decades ago. This presentation will examine a real life security breach and introduce architecture, technologies and policies to secure public and private clouds.

    10:30 am - 10:45 am | Break

    10:45 am - 11:30 am | Breakout Sessions (pick one)
    - Cloud Computing - Making IT Simple | Scott Mattoon | Salon E
      The road to Cloud Computing is not without a few bumps. This session will help to smooth out your journey by tackling some of the potential complications. We'll examine whether standardization is a prerequisite for the Cloud. We'll look at why refactoring isn't just for application code. We'll check out deployable entities and their simplification via higher levels of abstraction. And we'll close out the session with a look at engineered systems and modular clouds.
    - Innovations in Grid Computing with Oracle Coherence | Rob Misek | Salon C
      Learn how Coherence can increase the availability, scalability and performance of your existing applications with its advanced low-latency data-grid technologies. Also hear some interesting industry-specific use cases that customers have implemented, and how Oracle is integrating Coherence into its enterprise Java stack.

    11:30 am - 12:15 pm | Breakout Sessions (pick one)
    - Enterprise Strategy for Cloud Security | Dave Chappelle | Salon E
      Security is high on the list of concerns for many organizations as they evaluate their cloud computing options. This session will examine security in the context of the various forms of cloud computing. We'll consider technical and non-technical aspects of security, and discuss several strategies for cloud computing, from both the consumer and producer perspectives.
    - Oracle Enterprise Manager | Avi Huber | Salon C
      Much more than a DB management tool, Oracle Enterprise Manager provides management and monitoring coverage for the entire Oracle stack, and beyond. This session will concentrate on the middleware management functionality in OEM, starting with Real User Experience monitoring, through AppServer management, and into deep-dive Java diagnostics. We'll discuss Business Driven Application Management (BDAM) and the benefits of top-down monitoring. Lastly, we'll demonstrate how to trace a specific user experience problem, through a multitier SOA application, to its root cause, deep in the JVM.

    12:15 pm - 1:15 pm | Lunch | Salon E Foyer

    1:15 pm - 2:00 pm | Panel Discussion - Q&A with session speakers | Salon E

    2:00 pm - 2:45 pm | Breakout Sessions (pick one)
    - Oracle Cloud Reference Architecture | Anbu Krishnaswamy | Salon E
      Cloud initiatives are beginning to dominate enterprise IT roadmaps. Successful adoption of Cloud and the subsequent governance challenges warrant a Cloud reference architecture that is applied consistently across the enterprise. This presentation will answer the important questions: what exactly is a Cloud, why you need it, what changes it will bring to the enterprise, and what the key capabilities of a Cloud infrastructure are -- using Oracle's Cloud Reference Architecture, which is part of the IT Strategies from Oracle (ITSO) Cloud Enterprise Technology Strategy (ETS).
    - 21st Century SOA | Peter Belknap | Salon C
      Service Oriented Architecture has evolved from concept to reality in the last decade. The right methodology coupled with mature SOA technologies has helped customers demonstrate success in both innovation and ROI. In this session you will learn how Oracle SOA Suite's orchestration, virtualization, and governance capabilities provide the infrastructure to run mission critical business and system applications. And we'll take a special look at the convergence of SOA & BPM using Oracle's unified technology stack.

    2:45 pm - 3:00 pm | Break

    3:00 pm - 4:00 pm | Roundtable Discussion | Salon E

    4:00 pm - 4:15 pm | Closing Comments & Readouts from Roundtables | Salon E

    4:15 pm - 5:00 pm | Networking / Reception | Salon E Foyer

    Note: Session schedule and content subject to change.


  • Extending Database-as-a-Service to Provision Databases with Application Data

    - by Nilesh A
    Oracle Enterprise Manager 12c Database as a Service (DBaaS) empowers Self Service/SSA users to rapidly spawn databases on demand in the cloud. The configuration and structure of provisioned databases depends on the respective service template selected by the Self Service user while requesting the database. In EM12c, the DBaaS Self Service/SSA Administrator has the option of hosting various service templates in the service catalog, based on underlying DBCA templates. Provisioned databases often require production-scale data, whether for UAT, testing or development purposes, and managing DBCA templates with data can be unwieldy. So we need to populate the database using the post-deployment script option, without any additional work for the SSA users. The SSA Administrator can automate this task in a few easy steps. For details on how to set up the DBaaS Self Service Portal, refer to the DBaaS Cookbook.

    In this article, I will list the steps required to enable EM 12c DBaaS to provision databases with application data in two distinct ways: 1) Data Pump, and 2) transportable tablespaces (TTS). The steps listed below are just examples of how to extend EM 12c DBaaS; you can even have your own method plugged in as part of the post-deployment script option.

    Using Data Pump to populate databases

    These are the steps to follow to extend DBaaS using the Data Pump methodology:

    1. The production DBA should run a Data Pump export on the production database and make the dump file available to all the servers participating in the database zone [sample shown in Fig. 1]:

        -- Full export
        expdp FULL=y DUMPFILE=data_pump_dir:dpfull1%U.dmp, data_pump_dir:dpfull2%U.dmp PARALLEL=4 LOGFILE=data_pump_dir:dpexpfull.log JOB_NAME=dpexpfull

        Figure-1: Full export of a database using Data Pump

    2. Create a post-deployment SQL script [sample shown in Fig. 2]; this script can either be uploaded into the software library by the SSA Administrator or made available on a shared location accessible from the servers where databases are likely to be provisioned:

        -- Full import
        declare
            h1 NUMBER;
        begin
            -- Creating the directory object where the source database dump is backed up.
            execute immediate 'create directory DEST_LOC as ''/scratch/nagrawal/OracleHomes/oradata/INITCHNG/datafile''';
            -- Running import
            h1 := dbms_datapump.open (operation => 'IMPORT', job_mode => 'FULL', job_name => 'DB_IMPORT10');
            dbms_datapump.set_parallel(handle => h1, degree => 1);
            dbms_datapump.add_file(handle => h1, filename => 'IMP_GRIDDB_FULL.LOG', directory => 'DATA_PUMP_DIR', filetype => 3);
            dbms_datapump.add_file(handle => h1, filename => 'EXP_GRIDDB_FULL_%U.DMP', directory => 'DEST_LOC', filetype => 1);
            dbms_datapump.start_job(handle => h1);
            dbms_datapump.detach(handle => h1);
        end;
        /

        Figure-2: Importing using Data Pump PL/SQL procedures

    3. Using DBCA, create a template for the production database -- include all the init.ora parameters, tablespaces, datafiles and their sizes.

    4. The SSA Administrator should customize the "Create Database Deployment Procedure" and provide the DBCA template created in the previous step. In the "Additional Configuration Options" step of the customization flow, provide the name of the SQL script in the Custom Script section and lock the input (shown in Fig. 3). Continue saving the deployment procedure.

        Figure-3: Using the Custom Script option for calling the import SQL

    Now an SSA user can log in to the Self Service Portal and use the flow to provision a database that will also populate the data via the post-deployment step.

    Using transportable tablespaces to populate databases

    A copy of all user/application tablespaces enables this method of populating databases. These are the steps required to extend DBaaS using transportable tablespaces:

    1. The production DBA needs to create a backup of the tablespaces. Datafiles may need conversion [such as from big-endian to little-endian or vice versa] based on the platforms of production and of the destination where DBaaS created the test database. Here is a sample backup script that shows how to find out whether any conversion is required and describes the steps needed to convert datafiles and back up the tablespaces.

    2. The SSA Administrator should copy the database (tablespace) backup datafiles and export dumps to the backup location accessible from the hosts participating in the database zone(s).

    3. Create a post-deployment SQL script; this script can either be uploaded into the software library by the SSA Administrator or made available on a shared location accessible from the servers where databases are likely to be provisioned. Here is a sample post-deployment SQL script using transportable tablespaces.

    4. Using DBCA, create a template for the production database -- all the init.ora parameters should be included. NOTE: DO NOT choose to bring tablespace data into this template, as the tablespaces will be created by the post-deployment script.

    5. The SSA Administrator should customize the "Create Database Deployment Procedure" and provide the DBCA template created in the previous step. In the "Additional Configuration Options" step of the flow, provide the name of the SQL script in the Custom Script section and lock the input. Continue saving the deployment procedure.

    Now an SSA user can log in to the Self Service Portal and use the flow to provision a database that will also populate the data via the post-deployment step.

    More Information:
    - Database-as-a-Service on Exadata Cloud
    - Podcast on Database as a Service using Oracle Enterprise Manager 12c
    - Oracle Enterprise Manager 12c Installation and Administration guide, Cloud Administration guide
    - DBaaS Cookbook
    - Screenwatch: Private Database Cloud: Set Up the Cloud Self-Service Portal
    - Screenwatch: Private Database Cloud: Use the Cloud Self-Service Portal


  • Financial Management: Why Move to the Cloud?

    - by Kathryn Perry
    A guest post by Terrance Wampler, Vice President, Financials Product Strategy, Oracle

    I've spent my career designing and developing financial management systems, most of it at Oracle. Every single day I either meet with our customers or talk to them on the phone. The time is usually spent discussing various business challenges facing CFOs and Controllers who are running Oracle's Financials. Lately, we've been talking a lot about cloud computing and whether it makes sense for finance to go to the cloud. Here are some pros and cons that might help you make that decision.

    Let's start with the benefits of cloud solutions. The first is savings. With cloud services, you pay only for those commodities that you use. That makes you feel like you're getting better value for your money. Plus, you can preserve your cash for your core business and you can get a better matching of expenses and revenues. So, at the top of the list is lower total cost of ownership.

    The second point has to do with optimization. With cloud services, you'll need less IT infrastructure, so you can optimize your IT resources for better-value, higher-end projects. This also leads to greater financial visibility, where there's a clear cost for the set of services or features replaced by cloud services.

    And the last benefit is what I call acceleration. You can save money by speeding up the initialization and deployment of the project. You don't have to deal with IT infrastructure and you can start implementing right away.

    We did a quick survey of about 70 CFOs at the CFO Summit last month in New York City. We asked them why they were looking at cloud services, and not necessarily just for financials. The No. 1 response was perceived lower cost of ownership.

    But of course there are risks to consider. The first thing most people think about in the cloud is security and ownership of data. So, will your data really be safe? Can you meet your own privacy policy requirements? Do you really want your private financial data exposed? Do you trust the provider? Is what you see really your data? Do you own it or is it managed by someone else? Security is a big concern that comes with an emotional component.

    The next thing in the risk category is reliability. Is the provider proven? You're taking what you have control over -- for example, standards and policies and internal service level agreements -- away from your IT department and giving it to someone else. Will you still be able to adapt to shifts in your business? Will the provider be able to grow with your business effectively? Reliability means having a provider that can give you the service infrastructure that you need.

    And then there's performance, which has two components in terms of risk. Going forward, will the provider be able to scale the infrastructure or service level if you have new employees or new businesses? And second, will the price you negotiate and the rate you lock in cover additional costs and rising service fees?

    Another piece is cost. What happens if you don't get the service level you want? What if you end the service? What happens if, after a few years, you send the service out for bid and change providers? Can you move your data? Can you move the applications? Do the integrations work? These are cost components people don't always take into account.

    And the final piece is the business case. The perception is that you can get started really quickly with cloud. It has a perceived lower total cost of ownership, and it feels cool because it's cloud. But do you have a good business case for moving to the cloud? Your total cost of ownership is over three years; then you'll renew it, so your TCO is six years. Have you compared that to other internal services that you're offering? You might already have a product that you can run this new business or division on.

    In that same survey at the CFO Summit, the execs thought the biggest perceived risks were security of data, the ability to move data back, and the ability to create a business case to actually justify the risks. So that's the list of pros and cons. Not to leave you hanging, I will do another post on how to balance these pros and cons and make the right decision for your business.


  • The Arab HEUG is now a reality, and other random thoughts

    - by user9147039
    I just returned from Doha, Qatar, where the first of its kind HEUG (Higher Education User Group) meeting for institutions in the Middle East and North Africa was held at Qatar University and jointly hosted by Dammam University from Saudi Arabia. Over 80 delegates attended, including representation from education institutions in Oman, Saudi Arabia, Lebanon, and Qatar. There are many other regional HEUG organizations in place (in Australia/New Zealand, APAC, EMEA, as well as smaller regional HEUGs in the Netherlands, South Africa, and in regions of the US), but it was truly an accomplishment to see this Middle East/North Africa group organize and launch their chapter with a meeting of this quality. To be known as the Arab HEUG going forward, I am excited about the prospects for sharing between the institutions and for the growth of Oracle solutions in the region. In particular, the hosts for the event (Qatar University) did a masterful job with logistics and organization, and the quality of the event was a testament to their capabilities.

    Among the more interesting and enlightening presentations I attended were one from Dammam University on the lessons learned from their implementation of Campus Solutions and transition off of Banner, as well as Qatar University's use of E-Business Suite for grants management (both pre- and post-award). The most notable fact coming from this latter presentation was the fit (89%) of E-Business Suite Grants to the university's requirements.

    In a few weeks' time we will be convening the 5th meeting of the Oracle Education & Research Industry Strategy Council in Redwood Shores (the 5th since my advent into my current role). The main topics of discussion will be our Higher Education Applications Strategy for the future (including cloud approaches to ERP -- HCM, Finance, and Student Information Systems) and some case studies on the benefits of leveraging delivered functionality and extensibility in the software (versus customization). On the second day of the event we will turn our attention to Oracle in research, and also to budgeting and planning in higher education. Both of these sessions will include significant participation from council members in the form of panel discussions. Our EVPs for Systems (John Fowler) and for Global Cloud Services and North America application sales (Joanne Olson) will join us for the discussion.

    I recently read a couple of articles that were surprising to me. The first was from Inside Higher Ed on October 15, entitled "As colleges prepare for major software upgrades, Kuali tries to woo them from corporate vendors." It continues to disappoint that after all this time we are still debating whether it is better to build enterprise software through open or community source initiatives when fully functional, flexible, supported, and widely adopted options exist in the marketplace. A decade or more ago, when these solutions were relatively immature and there was a great deal of turnover in the market, I could appreciate initiatives like Kuali. But let's not kid ourselves -- the real objective of this movement is to counter a perceived predatory commercial software industry. Again, when commercial solutions are deployed as written without significant customization, and standard business processes are adopted, the cost of these solutions (relative to the value delivered) is quite low, and certainly much lower than the massive investment (and risk) in in-house developers to support a bespoke community source system.

    In this era of cost pressures in education and the need to refocus resources on teaching, learning, and research, I believe it's bordering on irresponsible to continue to pursue open-source ERP. Many adopters' total costs are staggering, with little to show for the effort and expended resources.

    The second article was recently in the Chronicle of Higher Education, entitled "'Big Data' Is Bunk, Obama Campaign's Tech Guru Tells University Leaders." This one was so outrageous I almost don't want to legitimize it by referencing it here. In the article the writer relays statements made by Harper Reed, President Obama's former CTO for his 2012 re-election campaign, that big data solutions in education have no relevance and are akin to snake oil. He goes on to state that while he's a fan of data-driven decision making in education, most of the necessary analysis can be accomplished in Excel spreadsheets. Yeah... right. This is exactly what ails education (higher education in particular): dozens of shadow and siloed systems running on spreadsheets, with limited-to-no enterprise-wide initiatives to harness the data-rich environment that is a higher ed institution and transform the data into usable information. I'll grant Mr. Reed that "Big Data" is overused and hackneyed, but imperatives like improving student success in higher education are classic big data problems that data mining and predictive analytics can address. Further, higher ed needs to produce many more data scientists and analysts than are currently in the pipeline, to further this discipline and the application of these tools to many other problems across multiple industries.

