Search Results

Search found 67143 results on 2686 pages for 'complex data types'.

Page 650 of 2686

  • Should I redo an abandoned project with Lightswitch?

    - by Elson
    I had a small project that I was doing on the side. It was basically a couple of forms linked to a DB. Access was out, because it was specifically meant to be a web application. Being a small project, I used ASP.NET Dynamic Data, but, for various reasons, the project ended before deployment. I met the client recently, and he said there was still a need for it. I'm considering restarting the project with Dynamic Data, but I've seen some Lightswitch demos and was suitably impressed with the beta. I will wait for the RTM if I use it, but is it a good idea to use Lightswitch to replace the Dynamic Data? The amount of work I put into the Dynamic Data site isn't really an issue. Additional information: it's a system that tracks production in a small factory, broken down by line, machine and section, and will generate reports. I would guess that the data structure will remain fairly constant over time, but that the reporting requirements will grow. The other thing is that the factory is part of a larger group, and I'm hopeful that, if this system succeeds, similar work will be forthcoming for other factories.

    Read the article

  • EntityDataSource Control Basics

    The Entity Framework can easily be used to create websites based on ASP.NET. The EntityDataSource control, one of a set of Web Server Datasource controls, can be used to bind an Entity Data Model (EDM) to data-bound controls on the page. These controls can be editable grids, forms, drop-down lists and master-detail pages, which can then be used to create, read, update, and delete data. Joydip tells you what you need to get started.

    Read the article

  • Postgres backup

    - by Abbass
    Hello, I have a Bacula script that does an automatic backup of a Postgres database. The script makes two backups of the database using pg_dump: the schema only and the data only. /usr/bin/pg_dump --format=c -s $dbname --file=$DUMPDIR/$dbname.schema.dump /usr/bin/pg_dump --format=c -a $dbname --file=$DUMPDIR/$dbname.data.dump The problem is that I can't figure out how to restore it with pg_restore. Do I need to create the database and the users first, then restore the schema, and finally the data? I did the following: pg_restore --format=c -s -C -d template1 xxx.schema.dump pg_restore --format=c -a -d xxx xxx.data.dump The first restore creates the database with empty tables, but the second gives many errors like this one: pg_restore: [archiver (db)] COPY failed: ERROR: insert or update on table "Table1" violates foreign key constraint "fkf6977a478dd41734" DETAIL: Key (contentid)=(1474566) is not present in table "Table23". Any ideas?
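
    A likely culprit is that the data-only pass fires foreign-key checks while tables load in arbitrary order. A minimal sketch of one common fix, assuming the dump names above, an invented dump directory, and superuser access (pg_restore's --disable-triggers flag defers constraint checking during the data load):

        import subprocess

        db = "xxx"                  # database name from the question
        dumpdir = "/path/to/dumps"  # assumed location of the two dump files

        # 1. Recreate the database from the schema-only dump; -C makes
        #    pg_restore connect to template1 and issue CREATE DATABASE first.
        subprocess.run(["pg_restore", "--format=c", "-s", "-C", "-d", "template1",
                        f"{dumpdir}/{db}.schema.dump"], check=True)

        # 2. Load the data with triggers (and FK checks) disabled, so the
        #    table load order no longer matters. Requires superuser rights.
        subprocess.run(["pg_restore", "--format=c", "-a", "--disable-triggers",
                        "-d", db, f"{dumpdir}/{db}.data.dump"], check=True)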

    Read the article

  • Developing with Oracle ADF Mobile and ADF Business Components Backend

    - by Shay Shmeltzer
    It's great to finally have the Oracle ADF Mobile solution out there. If you are not familiar with ADF Mobile - it basically lets you build applications that run on iOS and Android devices using the concepts you already know: component-based UI construction (same idea as JSF), taskflows, data controls, Java and, of course, JDeveloper. I created one demo that shows how to build an on-device application that gets data from local Java files (that run on the device - yes, we do Java on iOS too) - you can see it here. However, one thing many of you might be wondering is how you can get data from your database into these mobile applications. Well, if you already built your data access with Oracle ADF Business Components, then here is a two-step video demo that shows you what to do. The steps are: 1. Expose ADF Business Components as services. 2. Create an ADF Mobile application that consumes those services with the Web service data control. Simple, right? That's the whole point of ADF Mobile - making on-device application development as simple as possible. Try it out on your device.

    Read the article

  • Combo box filter on ExtJS 4.1.3

    - by saravanakumar
    I have created a window with a combo box. The combo configuration is: xtype: 'combo', fieldLabel: 'Command', labelAlign: 'right', id: 'commandInputComboId', store: commandStore, displayField: 'command', valueField: 'id', width: 500, enableKeyEvents: true, allowBlank: false, queryMode: 'local', typeAhead: true, triggerAction: 'all'. The query filter works for normal data, but I have data with escaped chars like &lt;, because I need to show it as '<' - for example, my data is <get-all-users>. The filter only applies when I type &lt;, not when I type <. How can I apply the filter on this data?
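
    The mismatch in miniature, sketched in Python rather than ExtJS: the store holds the escaped text, so a filter on the raw character never matches it.

        import html

        stored = html.escape("<get-all-users>")  # '&lt;get-all-users&gt;'
        typed = "<"

        print(typed in stored)                # False: raw '<' never matches the escaped text
        print(typed in html.unescape(stored)) # True: comparing against the unescaped form works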

    Read the article

  • Getting Started With Knockout.js

    - by Pawan_Mishra
    Client-side template binding in web applications is getting more popular with every passing day, and more and more libraries are coming up with enhanced support for it. jQuery Templates is one very popular mechanism for client-side template binding. The idea is simple: define the HTML mark-up with appropriate placeholders for data, then use a template engine like jQuery Templates to bind the data (JSON-formatted data) to the previously defined mark-up. In this...(read more)

    Read the article

  • What are the advantages of storing xml in a relational database?

    - by Chris
    I was poking around the AdventureWorks database today and I noticed that a number of tables (HumanResources.JobCandidate and Sales.Individual, for example) have a column which stores xml data. What I would like to know is: what is the advantage of storing basically a database table row's worth of data in another table's column? Doesn't this make it difficult to query this information? Or is the assumption that the data won't need to be queried and just needs to be stored?

    Read the article

  • OSXplanet: Updating cloud images

    - by Turgs
    Hello. I'm using OSXplanet (a Mac app based on xplanet) to view satellite and rendered images of the Earth (or other planets) as the desktop background on my laptop. The images refresh regularly as data is updated: cyclones, clouds, weather, volcanoes, etc. I have Cloud Data set to refresh every 3 hours, but the resulting image never changes. By default, OSXplanet seems to get image data from Iowa State University. Can I modify OSXplanet to pull cloud image data from a different server location listed on http://xplanet.sourceforge.net/clouds.php? Thanks, Turgs

    Read the article

  • MSM Merge Modules in Visual Studio 2013 [on hold]

    - by theGreenCabbage
    Could someone please let me know where I might find resources for creating MSM files? While I am able to create MSI files using InstallShield, it seems that Visual Studio no longer supports Merge Module Projects, judging by the link below and the screenshot of my version of Visual Studio 2013 - http://msdn.microsoft.com/en-us/library/z6z02ts5(v=vs.80).aspx To create a new merge module project: On the File menu, point to Add, then click New Project. In the resulting Add New Project dialog box, in the Project types pane, open the Other Project Types node and select Setup and Deployment Projects. In the Templates pane, choose Merge Module Project.

    Read the article

  • Convincing Upper Management of the need for larger monitors for Developers

    - by The Rubber Duck
    The company I work for has recently hired several developers, and there are a limited number of monitors to go around. There are two types in the office - a standard 15" (thankfully flatscreen) and a widescreen 23". No developer has a machine capable of a dual-monitor setup, and the largest monitors went to the people who got here first. Three or four new senior-level developers have only a 15" monitor to work on. To make matters worse, there are perhaps 25-30 DBAs/testers/admin types in the company who all have dual-screen 23" setups. We have brought the issue to management, and they refuse to take away large monitors from people who have been here for years for the sake of new employees, even if they are senior level. We have pitched the idea of testers sacrificing a large monitor for one of our small ones, but they won't go for that either. What can I say to management to illustrate the need for larger monitors for developers?

    Read the article

  • My first blog post…

    - by steveh99999
    I’ve been meaning to start a blog for a while now (OK, for several years…) - finally, here it begins. First post: something really simple, but a wise man once told me the best way to improve SQL Server performance. Store Less Data. That's it... that's all there is to it... Over the years, I've seen the following: - a 200Gb database which held 3 days of data. Once business requirements changed, we were able to hold only 1 day's data in this database. - a table developed by DBAs to hold application table cardinality information. That information was collected at 2-hour intervals, every day, for 7 years! After 7 years the DBA space-info table had become the largest table in the database - 60 million rows! It was a simple change to remove a lot of the historical intra-day data and change the schedule to run only once per evening. Suddenly that table held 6 million rows instead of 60 million... - lots of backup and restore history held in msdb. See this post by Brent Ozar for more details on this issue. Imagine how much faster the backups, DBCC checks and reindexes ran when the above 3 changes were implemented! How often do you review your big databases and tables to see if you're actually holding only data that is really required by the business?
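
    As a toy illustration of the second change - keeping one sample per day instead of one every 2 hours - here is a sketch using SQLite from Python (the table and column names are invented for the example):

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE space_info (captured_at TEXT, table_name TEXT, row_count INTEGER);
            INSERT INTO space_info VALUES
                ('2010-01-01 02:00', 't1', 10),
                ('2010-01-01 04:00', 't1', 11),
                ('2010-01-02 02:00', 't1', 12);
        """)

        # Keep only the last sample of each day per table; the intra-day
        # history is what made the real table balloon to 60 million rows.
        con.execute("""
            DELETE FROM space_info
            WHERE captured_at NOT IN (
                SELECT MAX(captured_at)
                FROM space_info
                GROUP BY table_name, date(captured_at)
            )
        """)

        print(con.execute("SELECT * FROM space_info").fetchall())
        # [('2010-01-01 04:00', 't1', 11), ('2010-01-02 02:00', 't1', 12)]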

    Read the article

  • More Tables or More Databases?

    - by BuckWoody
    I got an e-mail from someone with an interesting situation. He has 15,000 customers, and he asks if he should have a database per customer for their data. Without a LOT more information it's impossible to say, of course, but there are some general concepts to keep in mind. Whenever you're segmenting data, it's all about boundary choices. You have not only boundaries around how big the data will get, but things like how many objects (tables, stored procedures and so on) will be involved, whether there are any cross-sections of data (do they share location or product information?) and - very important - what are the security requirements? From the answers to these types of questions, you have the choice of making multiple tables in a single database, or using multiple databases. A database carries some overhead - it needs a certain amount of memory for locking and so on. But it has a very clean boundary - everything from objects to security can be kept apart. Having multiple users in the same database is possible as well, using things like a Schema. But keeping 15,000 schemas can be challenging too. My recommendation in complex situations like this is similar to a post on decisions that I did earlier - I lay out the choices on a spreadsheet in rows, and my requirements at the top in the columns. I give each choice a number based on how well it meets each requirement, and at the end, the highest number wins. And many times it's a mix - perhaps this person could segment customers into larger regions or districts or products in a database, with multiple schemas for the customers within it. Of course, if he needs to query across all customers, that becomes another requirement.
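
    A toy version of that scoring spreadsheet, sketched in Python (the choices, requirements and scores here are invented for illustration):

        # Choices as rows, requirements as columns, highest total wins.
        requirements = ["security isolation", "low overhead", "cross-customer queries"]
        scores = {
            "database per customer":       [3, 1, 1],
            "shared database, schemas":    [2, 2, 2],
            "shared tables, customer key": [1, 3, 3],
        }

        for choice, row in sorted(scores.items(), key=lambda kv: -sum(kv[1])):
            print(f"{choice:30} total={sum(row)}")
        # The top line is the winner under these (made-up) scores.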

    Read the article

  • Announcing Oracle MDM YouTube Channel!

    - by Michelle Kimihira
    We are excited about the new Oracle MDM YouTube channel, where you can watch videos related to Master Data Management. You will find both product videos and customer videos. Be sure to subscribe to the channel so you don't miss out! Spend a moment to visit us at http://www.youtube.com/oraclemdm. Additional information: Product information on Oracle.com: Oracle Fusion Middleware. Follow us on Twitter. Read and subscribe to our bi-monthly Data Integration and Master Data Management newsletter.

    Read the article

  • What Design Pattern is separating transform converters?

    - by RevMoon
    For converting a Java object model into XML I am using the following design: for different types of objects (e.g. primitive types, collections, null, etc.) I define a converter for each, which acts appropriately with respect to the given type. This way the design can easily be extended without adding code to a huge if-else-then construct. The converters are chosen by a method which tests whether the object is convertible at all, and by using a priority ordering. The priority ordering is important: let's say a List is not converted by the POJO converter - even though it is convertible as such, it would be more appropriate to use the collection converter. What design pattern is that? I can only think of a similarity to the command pattern.
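
    A minimal sketch of the selection scheme described (in Python rather than Java, with invented converter names): each converter reports whether it can handle a value, and the dispatcher picks the highest-priority converter that accepts it.

        class CollectionConverter:
            priority = 10  # outranks the generic converter for lists and the like
            def can_convert(self, value):
                return isinstance(value, (list, tuple, set))
            def to_xml(self, value):
                return "<list>" + "".join(convert(v) for v in value) + "</list>"

        class DefaultConverter:
            priority = 1   # fallback: accepts anything
            def can_convert(self, value):
                return True
            def to_xml(self, value):
                return f"<value>{value}</value>"

        CONVERTERS = [DefaultConverter(), CollectionConverter()]

        def convert(value):
            # Of the converters that accept the value, the highest priority wins.
            best = max((c for c in CONVERTERS if c.can_convert(value)),
                       key=lambda c: c.priority)
            return best.to_xml(value)

        print(convert([1, 2]))  # <list><value>1</value><value>2</value></list>

    For what it's worth, selection-by-acceptance with a fallback like this is most often discussed under the strategy and chain-of-responsibility patterns.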

    Read the article

  • How do I count Internal Logical Files (ILF) and External Inputs (EI) for a dynamic form entry page?

    - by DmytroL
    Assuming I have an applicant information entry screen, where the number and types of fields can be defined by the system administrator, how do I go about counting the number of Internal Logical Files (ILFs) and Data Element Types (DETs) for the related data functions? So far I have come up with something like this: ILF #1 (control information): Field Metadata, 1 RET, ~3 DET (name, type, mandatory). ILF #2 (business data): Applicant Data, most likely 1 RET, but how many DETs? Of course I could count it as 2 DET (field ref, value), but I am not sure that would be correct. And when it comes to an External Input (EI), say "Add New Applicant", things become even more complicated, because the number of DETs corresponding to the user-editable fields is totally dependent on the control information in ILF #1, and I am out of ideas here... Anyone fancy helping with that? Thanks in advance!

    Read the article

  • How does the fstab 'defaults' option work? Is relatime recommended?

    - by hushs
    I know the fstab defaults option means this: rw,suid,dev,exec,auto,nouser,async. But what if I want to add one more option, for example relatime - should I still add defaults too, or is it applied anyway? Is it necessary to add at least one option? Some examples: 1. UUID=bfb42838-d866-4233-9679-96e7536356df /media/data ext3 defaults 0 2 2. UUID=bfb42838-d866-4233-9679-96e7536356df /media/data ext3 0 2 3. UUID=bfb42838-d866-4233-9679-96e7536356df /media/data ext3 defaults,relatime 0 2 4. UUID=bfb42838-d866-4233-9679-96e7536356df /media/data ext3 relatime 0 2 Is (2) correct (no options at all)? Are (1) and (2) the same? Are (3) and (4) the same? Furthermore, I read in the Ubuntu Community Documentation that in Ubuntu 8.04 relatime was used as the default for Linux native file systems. Is it still true for 12.04? If yes, then why do I see this when I use the mount command: /dev/sda2 on / type ext4 (rw,errors=remount-ro) If no, why not? Isn't it recommended to use relatime now? I just wanted to apply it to my non-system partitions - is that a good idea? EDIT: I found another command to list the mounted partitions and their options: cat /proc/mounts This is the result for a partition mounted with the defaults option in fstab: /dev/sdb2 /media/adat ext3 rw,relatime,errors=continue,barrier=1,data=ordered 0 0 This is the output of mount for the same partition: /dev/sdb2 on /media/adat type ext3 (rw) And here are both results when the same partition is mounted from Nautilus as a non-root user: /dev/sdb2 /media/adat ext3 rw,nosuid,nodev,relatime,errors=continue,barrier=1,data=ordered 0 0 /dev/sdb2 on /media/adat type ext3 (rw,nosuid,nodev,uhelper=udisks) So it looks like relatime is used when we mount an ext partition in 12.04, so it is unnecessary to add it manually. My problem is broadly solved. But I still can't see why the options that should be in the defaults are not listed, even with cat /proc/mounts. Maybe there is a third and even better method to list the partition mount options :)
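
    A small sketch of that same check from Python instead of the shell (the mount point is the one from the question): /proc/mounts reports the options the kernel actually applied, which is why relatime shows up there even when fstab only says defaults.

        # Print the effective mount options for one mount point.
        with open("/proc/mounts") as mounts:
            for line in mounts:
                device, mountpoint, fstype, options = line.split()[:4]
                if mountpoint == "/media/adat":
                    print(fstype, options.split(","))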

    Read the article

  • AppFabric named cache, what happens if you lose a cache host?

    - by Liam
    I'm getting my head around how AppFabric clustering works, and there's something I'm not sure about. Given a structure where we have one named cache, two lead hosts, (say) three cache hosts, high availability turned off, and the lead hosts performing the management role: when one cache host goes down, do you lose the data that was on that cache host? In this MSDN article it states: "Data on the non-lead hosts would be lost (assuming high availability was not enabled), but the rest of the cluster could continue serving and storing data." But I was unsure whether redundancy is built into the system. Would you lose that data, or would one of the other cache hosts also store this data and pick up the slack?

    Read the article

  • MySQL cluster: 20Tb x 3K tables

    - by ethrbunny
    Over the next 2-3 years we will be scaling up data collection for a project. As a result, the amount of data will grow 10-fold. Our current MySQL installation can keep up with the 2Tb of data, but for larger queries there is a fair amount of IOWait. I'm investigating a migration to a clustered solution to spread out the IO, but am wondering about NDB and what happens to data that doesn't get accessed very often. The impression I get from reading about MySQL Cluster is that it relies on in-memory tables for most of the data. What happens with tables that don't get accessed very often (or at all)? And how does backup work? Can I use mysqldump, or is there a better solution?

    Read the article

  • What's Happening in Business Analytics at OpenWorld 2012?

    - by jmorourke
    Oracle OpenWorld 2012 is rapidly approaching on September 30th, when we take over the city of San Francisco for five days. The Business Analytics program this year is our strongest ever, with over 150 EPM, BI, Analytics and Data Warehousing sessions delivered by Oracle, our customers and partners. We’ll also have Hands-On Labs, 20 demo pods dedicated to Business Analytics products, and over 30 partners exhibiting their solutions. So what’s hot in the Business Analytics program at OpenWorld? Here are some of the “can’t miss” sessions at this year’s conference: The EPM and BI general sessions, led by SVP of Product Development Balaji Yelamanchili, will highlight what’s new and provide a view into Oracle’s EPM, BI and Analytics strategies. Both sessions are scheduled for Monday, October 1st. Thursday Keynote: See More, Act Faster: Oracle Business Analytics, led by Oracle President Mark Hurd, will provide a view into Oracle’s strategy for Business Analytics, especially engineered systems designed to provide extreme performance for the most rigorous analytic tasks. Superfast Business Intelligence with Oracle Exalytics: hear about various business intelligence scenarios in which Oracle Exalytics provides exemplary value, from operational reporting and prepackaged applications to analytics on unstructured data. Turn Insights into Real-Time Actions with Oracle Business Intelligence Mobile: learn how Oracle Business Intelligence Mobile enables organizations to deliver relevant information and turn insight into real-time action, no matter where employees are located. Empowering the Business User: Introduction to Oracle Endeca Information Discovery: find out how you can get fast answers to the new questions that confront your business every day, while avoiding the confusion and inconsistencies brought about by spreadsheets and desktop tools. Big Data: The Big Story: learn how to harness big data, your existing data, and predictive analytics to make better decisions in an environment of rapid shifts in behavior and instant feedback; learn about the technologies that constitute a big data architecture, how to leverage and implement advanced analytics for real-time decisions, and the tools needed to know the unknown. Planning at the Speed of Business with Oracle Exalytics: learn how Oracle Hyperion Planning leverages the power of Oracle Exalytics to do planning faster, with more detail and more users than ever. For more details on these and other Business Analytics sessions at OpenWorld, download the Focus On Business Analytics program guide at http://www.oracle.com/openworld/focus-on/index.html We look forward to seeing you in San Francisco!

    Read the article

  • Reporting what's not there

    It's easy to write queries that show the data in the database matching a set of criteria. However, if no data matches the criteria, reporting becomes more difficult. This article examines two scenarios where it's necessary to create data in order to report zero values in queries.
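
    A minimal sketch of the technique (using SQLite via Python, with an invented sales table): generate the full set of expected keys, then LEFT JOIN the facts onto it so groups with no matching rows still report a zero.

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE sales (region TEXT, amount INTEGER);
            INSERT INTO sales VALUES ('North', 100), ('North', 50), ('South', 75);
        """)

        # 'East' has no sales at all; joining from a generated region list
        # makes it show up as zero instead of vanishing from the report.
        rows = con.execute("""
            WITH regions(region) AS (VALUES ('North'), ('South'), ('East'))
            SELECT r.region, COUNT(s.amount) AS orders
            FROM regions r
            LEFT JOIN sales s ON s.region = r.region
            GROUP BY r.region
            ORDER BY r.region
        """).fetchall()

        print(rows)  # [('East', 0), ('North', 2), ('South', 1)]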

    Read the article

  • PHP Fingerprinting CMS Versions by their meta tags [migrated]

    - by Mud
    Hey guys, I'm having some issues with the speed of my script. I'm a novice, I know, so getting past that: what suggestions would you have to speed it up? I was originally just reading in index.php and then searching the <head> of the page for an array of strings. Then I read about get_meta_tags and went that way. Then I had issues with some sites having 3xx redirects in place, so I used curl to check that the URL existed and to speed things up, but it still takes 5 minutes or so to execute.

        <?php
        // Check that a URL responds, using a HEAD request and following
        // 3xx redirects (the original did not follow them).
        function url_exist($url) {
            $c = curl_init();
            curl_setopt($c, CURLOPT_URL, $url);
            curl_setopt($c, CURLOPT_HEADER, 1);
            curl_setopt($c, CURLOPT_NOBODY, 1);
            curl_setopt($c, CURLOPT_RETURNTRANSFER, 1);
            curl_setopt($c, CURLOPT_FRESH_CONNECT, 1);
            curl_setopt($c, CURLOPT_FOLLOWLOCATION, 1);
            $ok = (curl_exec($c) !== false);
            curl_close($c); // in the original this came after return, so it never ran
            return $ok;
        }

        // Report the CMS version from the page's generator meta tag, if any.
        function checkVersion($url) {
            $tags = get_meta_tags($url);
            if (is_array($tags) && array_key_exists('generator', $tags)) {
                return "<span style='background-color:#7BF55D;color:#A3A0A0'>" . $tags['generator'] . "</span>";
            }
            return "<span style='background-color:#F55D67;color:#A3A0A0'>Metatag not found!</span>";
        }

        echo "<table>";
        if (($handle = fopen("url.csv", "r")) !== FALSE) {
            while (($data = fgetcsv($handle, 1000, ",")) !== FALSE) {
                foreach ($data as $url) {
                    if (url_exist($url)) {
                        echo "<tr><td>" . $url . "</td><td>" . checkVersion($url) . "</td></tr>";
                        sleep(2); // note: this alone costs 2 seconds per reachable URL
                    } else {
                        echo "<tr><td>" . $url . "</td><td><span style='background-color:#F55D5D;color:#A3A0A0'>URL not valid!</span></td></tr>";
                    }
                }
            }
            fclose($handle);
        }
        echo "</table>";
        ?>

    Read the article

  • Gone in 60 Seconds: An Insecure Database is an Easy Target

    - by Troy Kitch
    According to the recent Verizon Data Breach Investigations Report, 98% of breached data originates from database servers, and nearly half are compromised in less than a minute! Almost all victims are not even aware of a breach until a third party notifies them, and nearly all breaches could have been avoided through the use of basic controls. Join us for this November 28th webcast to learn more about the evolving threats to databases that have resulted in over 1 billion stolen records. Also hear how organizations can mitigate risk by adopting a defense-in-depth strategy that focuses on basic controls to secure data at the source: the database. There's no turning back the clock on stolen data, but you can put controls in place to ensure your organization won't be the next headline. Note: this webcast will be recorded for on-demand access after November 28th.

    Read the article
