Search Results

Search found 37875 results on 1515 pages for 'version space'.


  • ERRNO 5 Input/Output Error

    - by CCarey
    Going up for my first ubuntu installation and encountered a critical error. Mind that I am installing on my Macbook Pro, and have already removed all other partitions. (I'm installing with a CD) My Ubuntu version is: "ubuntu-12.10-desktop-i386" So once the installation gets to something around "Finishing copying files", a great big "[ERRNO5] Input/Output Error" pops up on the screen. Obviously this halts and crashes the whole installation. Now, I've already run a disk check, memtest, and cpu load test, and all came up green. I have also redownloaded ubuntu twice, md5 match both times, and burnt four disks. None got past this error. If anyone could help me out, that would be greatly appreciated! Cheers!
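    For reference, the md5 verification mentioned is a one-liner; assuming the ISO sits in the current directory, compare the output against the hash published on the Ubuntu releases page:

        md5sum ubuntu-12.10-desktop-i386.iso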


  • How to Automate your Database Documentation

    - by Jonathan Hickford
    In my previous post, “Automating Deployments with SQL Compare command line” I looked at how teams can automate the deployment and post deployment validation of SQL Server databases using the command line versions of Red Gate tools. In this post I’m looking at another use for the command line tools, namely using them to generate up-to-date documentation with every database change. There are many reasons why up-to-date documentation is valuable. For example, when somebody new has to work on or administer a database for the first time, or when a new database comes into service. Having database documentation reduces the risks of making incorrect decisions when making changes. Documentation is very useful to business intelligence analysts when writing reports, for example in SSRS. There are a couple of great examples talking about why up-to-date documentation is valuable on this site: Database Documentation – Lands of Trolls: Why and How? and Database Documentation Using SQL Doc. The short answer is that it can save you time and reduce risk when you need it most!

    SQL Doc is a fast, simple tool that automatically generates database documentation. It can create documents as HTML, Word or PDF files. The documentation contains information about object definitions and dependencies, along with any other information you want to associate with each object. The SQL Doc GUI, which is included in Red Gate’s SQL Developer Bundle and SQL Toolbelt, allows you to add additional notes to objects and customise which objects are shown in the docs. These settings can be saved as a .sqldoc project file. The SQL Doc command line can use this project file to automatically update the documentation every time the database is changed, ensuring that the documentation is always up to date.

    The simplest way to keep documentation up to date is probably to use a scheduled task to run a script every day. However, if you have a source controlled database, or are using a Continuous Integration (CI) server or a build server, it may make more sense to use that instead. If you’re using SQL Source Control or SSDT Database Projects to help version control your database, you can automatically update the documentation after each change is made to the source control repository that contains your database. To get this automation in place, you can use the functionality of a Continuous Integration (CI) server, which can trigger commands to run when a source control repository has changed. A CI server will also capture and save the documentation that is created as an artifact, so you can always find the exact documentation for a specific version of the database. This forms an always up-to-date data dictionary. If you don’t already have a CI server in place there are several you can use, such as the free open source Jenkins or the free starter editions of TeamCity. I won’t cover setting these up in this article, but there is information about using CI servers for automating database tasks on the Red Gate Database Delivery webpage. You may be interested in Red Gate’s SQL CI utility (part of the SQL Automation Pack), which is an easy way to update a database with the latest changes from source control.

    The PowerShell example below shows how to create the documentation from a database. That database might be your integration database or a shared development database that is always up to date with the latest changes.
$serverName = "server\instance" $databaseName = "databaseName" # If you want to document multiple databases use a comma separated list $userName = "username" $password = "password" # Path to SQLDoc.exe $SQLDocPath = "C:\Program Files (x86)\Red Gate\SQL Doc 3\SQLDoc.exe" $arguments = @( "/server:$($serverName)", "/database:$($databaseName)", "/username:$($userName)", "/password:$($password)", "/filetype:html", "/outputfolder:.", # "/project:$args[0]", # If you already have a .sqldoc project file you can pass it as an argument to this script. Values in the project will be overridden with any options set on the command line "/name:$databaseName Report", "/copyrightauthor:$([Environment]::UserName)" ) write-host $arguments & $SQLDocPath $arguments There are several options you can set on the command line to vary how your documentation is created. For example, you can document multiple databases or exclude certain types of objects. In the example above, we set the name of the report to match the database name, and use the current Windows user as the documentation author. For more examples of how you can customise the report from the command line please see the SQL Doc command line documentation If you already have a .sqldoc project file, or wish to further customise the report by including or excluding specific objects, you can use this project on the command line. Any settings you specify on the command line will override the defaults in the project. For details of what you can customise in the project please see the SQL Doc project documentation. In the example above, the line to use a project is commented out, but you can uncomment this line and then pass a path to a .sqldoc project file as an argument to this script.  Conclusion Keeping documentation about your databases up to date is very easy to set up using SQL Doc and PowerShell. By using a CI server to run this process you can trigger the documentation to be run on every change to a source controlled database, and keep historic documentation available. If you are considering more advanced database automation, e.g. database unit testing, change script generation, deploying to large numbers of targets and backup/verification, please email me at [email protected] for further script samples or if you have any questions.


  • Cannot find install-sh, install.sh, or shtool in ac-aux

    - by Micah
    This is my first time trying to compile and install anything on a Linux machine. I got the latest version of https://github.com/processone/exmpp via git and read the instructions, which state:

        2. Build and install

        Exmpp uses the Autotools. Therefore the process is quite common:

        $ ./configure
        $ make
        $ sudo make install

    After typing ./configure I get the error:

        Cannot find install-sh, install.sh, or shtool in ac-aux

    Google was of little to no help. I'm not sure at all what I'm supposed to do. Any help would be much appreciated.
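    For what it's worth, this error usually means the generated Autotools helper scripts aren't present in the checkout. One common fix - assuming autoconf, automake and libtool are installed - is to regenerate them before configuring:

        autoreconf --force --install
        ./configure
        make
        sudo make install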


  • Configuring thousands of related products in Magento?

    - by Anonymous -
    I'm at a stage with a Magento store I'm developing where I've added all the products (all 6000 of them) and now would like to configure related products to up my conversion rate a bit. I was wondering if there was an extension anybody knew of that functions similarly to this one, with the most current version of Magento (Community Edition, 1.6.1). If not, would anyone be able to provide some pointers for writing a script that will run through each product and add 1-5 related products. I have a fairly basic idea of taking product title text and just doing a simple text similarity query between other product titles for now, just to get some related products up there, but the Magento database isn't making a terribly large amount of sense. Thanks to anyone who can shed some light on this. :)
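    As a rough sketch of the title-similarity idea in SQL (the product_flat table below is a hypothetical flattened view - real Magento catalog data lives in EAV tables you would need to join first, and the matches would ultimately go into Magento's product link tables):

        -- Pair products whose names share their first two words
        SELECT a.product_id, b.product_id AS related_id
        FROM product_flat a
        JOIN product_flat b
          ON b.product_id <> a.product_id
         AND b.name LIKE CONCAT(SUBSTRING_INDEX(a.name, ' ', 2), '%');

    Capping this at 1-5 related products per item would still need a ranking pass in application code.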


  • Which SSL do I need?

    - by Maik Klein
    I need to buy an SSL certificate. Now there are so many different alternatives with a huge price range. I know the very basic differences of browser compatibility and security level, but I need a "cheap" SSL certificate. My homepage looks like this: http://www.test.com. Now if I go to the login page I should switch to HTTPS, like this: https://www.test.com/login. I am also considering securing the whole site once the user has signed in. Now there are sites which are offering SSL for $7/year. Would this do the job? Or would you recommend getting something more expensive, like this one? I want to add PayPal support in a later version of my website and I don't want to save money on the wrong end. What would you recommend?
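    Whichever certificate you choose, the login redirect itself is just web-server configuration. A minimal Apache sketch, assuming mod_rewrite is enabled and the paths match your setup:

        RewriteEngine On
        # Send any plain-HTTP request for the login page to HTTPS
        RewriteCond %{HTTPS} off
        RewriteRule ^/?login https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]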


  • How to tweet automatically when you push a new package to nuget.org

    - by Daniel Cazzulino
    Wouldn’t it be nice if your followers could be notified whenever you publish a new version of a NuGet package? Currently, nuget.org offers no support for this, but with the following tricks you can get it working without programming. The essential idea is to use the OData feed that nuget.org exposes to build an RSS feed with new items as you publish them, and have IFTTT do the tweeting from it. The tools we’ll use to get this working are:

    LinqPad: to examine the nuget.org OData feed at https://nuget.org/api/v2
    Yahoo Pipes: to tweak the OData feed output so that it looks like a “plain” feed
    IFTTT: to consume the pipe output and auto-tweet on new items

    Exploring NuGet OData Feed with LinqPad

    In order to build the query that will become your tweets’ source, we will add a new connection in LinqPad by clicking on the “Add Connection” link: ...Read full article
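    To get a feel for the feed before wiring anything up, you can query it straight from a browser; a hedged example, where 'MyPackage' is a placeholder package id and the query options are standard OData v2 syntax:

        https://nuget.org/api/v2/Packages()?$filter=Id eq 'MyPackage'&$orderby=Published desc&$top=5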


  • Here’s How to Download Windows 8 Release Preview Right Now

    - by The Geek
    Want to get the latest version of Windows 8 right now? This one is called the Release Preview, and it’s available for download right now. A lot of little bugs have been resolved, the multi-monitor support has improved, and you should download it now. You can grab the regular installer, or you can get the ISO image and use it in a virtual machine.

    Download Windows 8 Release Preview in ISO format
    Download the Windows 8 Release Preview


  • How do I fix dependency problems with the kernel in apt?

    - by Jon
    When trying to install new packages, either manually or with muon, I get these errors:

        jon@jon-desktop:~/Apps/mendeleydesktop-1.5-dev4-linux-x86_64/bin$ sudo apt-get install kupfer
        [sudo] password for jon:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run 'apt-get -f install' to correct these:
        The following packages have unmet dependencies:
         kupfer : Depends: python-keybinder but it is not going to be installed
                  Recommends: python-wnck but it is not going to be installed
         linux-headers-generic : Depends: linux-headers-3.2.0-20-generic but it is not installable
         linux-image-generic : Depends: linux-image-3.2.0-20-generic but it is not installable
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

        jon@jon-desktop:~/Apps/mendeleydesktop-1.5-dev4-linux-x86_64/bin$ sudo apt-get -f install
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          linux-generic linux-headers-generic linux-image-generic
        The following packages will be upgraded:
          linux-generic linux-headers-generic linux-image-generic
        3 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
        3 not fully installed or removed.
        Need to get 0 B/6,658 B of archives.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]?
        dpkg: dependency problems prevent configuration of linux-image-generic:
         linux-image-generic depends on linux-image-3.2.0-20-generic; however:
          Package linux-image-3.2.0-20-generic is not installed.
        dpkg: error processing linux-image-generic (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        dpkg: dependency problems prevent configuration of linux-generic:
         linux-generic depends on linux-image-generic (= 3.2.0.20.22); however:
          Package linux-image-generic is not configured yet.
        dpkg: error processing linux-generic (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        dpkg: dependency problems prevent configuration of linux-headers-generic:
         linux-headers-generic depends on linux-headers-3.2.0-20-generic; however:
          Package linux-headers-3.2.0-20-generic is not installed.
        dpkg: error processing linux-headers-generic (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        Errors were encountered while processing:
         linux-image-generic
         linux-generic
         linux-headers-generic
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    As indicated above, I ran sudo apt-get -f install but it still tells me there are dependency issues.
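    The -20 kernel packages being reported as "not installable" usually means the local package index is stale and the archive has moved on to a newer kernel build. A hedged first step is to refresh the index and let the meta-packages pull in whatever kernel is current:

        sudo apt-get update
        sudo apt-get install linux-generic linux-headers-generic linux-image-generic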


  • Default Save Directory for gnome-screenshot?

    - by trent
    Is there any sort of configuration option for specifying the default save location for gnome-screenshot, or is this hard-coded into the source code? It used to be ~/Desktop, which seems to have changed to ~/Pictures (in 12.04). The only possible solution I've seen is about setting the default file name (as it includes timestamp information now instead of simply Screenshot#), but that solution doesn't really seem ideal to me. Also, this post suggested that the last save location is remembered the next time you take a screenshot, but in my experience this doesn't seem to be the case. And in any case, following on from that, the entry in gconf-editor doesn't even seem to accurately reflect the last location, so it's more than likely an entry related to an older version of gnome-screenshot.
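    For what it's worth, newer gnome-screenshot builds expose save locations through GSettings rather than GConf; whether the 12.04 build honours the key is an open question, so treat this as a sketch to try:

        gsettings list-keys org.gnome.gnome-screenshot    # see what the installed schema offers
        gsettings set org.gnome.gnome-screenshot auto-save-directory "file:///home/$USER/Pictures/Screenshots"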


  • NetBeans Podcast 69

    - by TinuA
    Podcast Guests: Terrence Barr, Simon Ritter, Jaroslav Tulach (It's an all-Oracle lineup!)
    Download mp3: 47 Minutes – 39.5 mb
    Subscribe on iTunes

    NetBeans Community News with Geertjan and Tinu
    If you missed the first two Java Virtual Developer Day events in early May, there's still one more LIVE training left on May 28th. Sign up here to participate live in the APAC time zone or watch later ON DEMAND.
    Video: Get started with Vaadin development using NetBeans IDE
    NetBeans IDE was at JavaCro 2014 and at Hippo Get-together 2014
    Another great lineup is in the works for NetBeans Day at JavaOne 2014. More details coming soon!
    NetBeans' Facebook page is almost at 40,000 Likes! Help us crack that milestone in the next few weeks!
    Other great ways to stay updated about NetBeans? Twitter and Google+.

    09:28 / Terrence Barr - What to Know about Java Embedded
    Terrence Barr, a Senior Technologist and Principal Product Manager for Embedded and Mobile technologies at Oracle, discusses new features of the Java SE Embedded and Java ME Embedded platforms, and sheds some light on the differences between them and what they have to offer to developers.
    Learn more about Java SE Embedded
    Tutorial: Using Oracle Java SE Embedded Support in NetBeans IDE
    Learn more about Java ME Embedded
    Video: NetBeans IDE Support for Java ME 8
    Video: Installing and Using Java ME SDK 8.0 Plugins in NetBeans IDE
    Follow Terrence Barr to keep up with news in the Embedded space: Blog and Twitter

    26:02 / Simon Ritter - A Massive Serving of Raspberry Pi
    Oracle's Raspberry Pi virtual course is back by popular demand! Simon Ritter, the head of Oracle's Java Technology Evangelism team, chats about the second run of the free Java Embedded course (starting May 30th), what participants can expect to learn, NetBeans' support for Java ME development, and other Java trainings coming to a desktop, laptop or user group near you.
    Sign up for the Oracle MOOC: Develop Java Embedded Applications Using Raspberry Pi
    Find out when Simon Ritter and the Java Evangelism team are coming to a Java event or JUG in your area--follow them on Twitter: Simon Ritter, Angela Caicedo, Steven Chin, Jim Weaver

    36:58 / Jaroslav Tulach - A Perfect Translation
    Jaroslav Tulach returns to the NetBeans podcast with tales about the Japanese translation of the Practical API Design book, which he contends surpasses all previous translations, including the English edition!
    Order "Practical API Design" (Japanese Version)
    Find out why the Japanese translation is the best edition yet

    *Have ideas for NetBeans Podcast topics? Send them to nbpodcast at netbeans dot org.
    *Subscribe to the official NetBeans page on Facebook! Check us out as well on Twitter, YouTube, and Google+.


  • Eclipse Indigo is here

    - by alexismp
    The yearly Eclipse update is here, and it's called Indigo. As with every release this is the synchronized release of a large number of projects : 62 this year. Some of the new features include Maven Integration (via M2E, a new project with this release), support for Hudson (via Mylyn), as well as the integration of EclipseLink 2.3 (which does multitenancy and more, see release page, blog). Support for Java 7 is expected for the September update release. The "Eclipse IDE for Java EE Developers" bundle is 210 MB. Support for GlassFish is available today as well! The GlassFish plugin now offers the ability to deploy to remote running GlassFish instances and supports version 3.0.x and 3.1.x (including recent promoted builds). That GlassFish plugin for Indigo also works for the earlier Helios release. The update to Oracle Enterprise Pack for Eclipse (OEPE) will come with the Indigo September (3.7.1) update. Here is some coverage for this major release: PressRelease, DZone, InfoQ.


  • libimobiledevice wants to remove all my other packages

    - by Dror Cohen
    When running the command sudo apt-get remove libimobiledevice2 I get:

        The following packages will be REMOVED:
        ... gdm gdm-guest-session gnome-power-manager gnome-session gnome-session-bin
        gvfs-backends indicator-power indicator-session kde-plasma-desktop kde-standard
        libgpod-common libgpod4 libimobiledevice2 nautilus-share ubuntu-desktop upower

    Is it really necessary to remove all of my KDE and GNOME packages? The source of the problem is that the installed Oneiric package doesn't recognize my iOS 5.1 device - so I wanted to switch to the latest and greatest (1.0.7, and if that's not good enough I'll go to the dev version 1.1.2). I'm using Oneiric 64-bit.
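    What's happening is that meta-packages like ubuntu-desktop and kde-standard depend, directly or indirectly, on libimobiledevice2, so removing it drags them along. A safe way to preview the impact before committing is apt's simulate flag (a sketch; -s makes no changes at all):

        # -s (--simulate) prints what would be removed without doing it
        sudo apt-get -s remove libimobiledevice2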


  • How can I get ddclient to work with freedns?

    - by Rob Fisher
    I use the dynamic DNS service at freedns.afraid.org for my 12.04 server. I had assumed that the protocols would be standardised and that ddclient would just work, but apparently not. I get this message in /var/log/syslog: ERROR: Invalid update URL (2): unexpected status () I tried to use the updated version of ddclient from the alternative PPA described in this answer, but then I hit this error: FATAL: Error loading the Perl module Digest::SHA1 needed for freedns update. FATAL: On Debian, the package libdigest-sha1-perl must be installed. And when I try to install that package, I get this: E: Unable to locate package libdigest-sha1-perl Which leads me to this bug report, which apparently has no solution. How to proceed?
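    Since libdigest-sha1-perl was dropped from the Ubuntu archives around this release, one workaround - an assumption to verify against your ddclient version's requirements - is to pull the module from CPAN instead:

        sudo apt-get install build-essential    # CPAN builds need a compiler toolchain
        sudo cpan Digest::SHA1                  # installs the missing Perl module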


  • Is there an Android remote app compatible with LibreOffice Impress under Ubuntu?

    - by WarriorIng64
    I have an upcoming presentation I want to make in LibreOffice Impress on my Ubuntu 12.10 laptop. I was wondering if I could get ahold of an Android app which would act like a remote control, allowing me to switch between slides from my phone over WiFi without having to stay near the laptop (I've been told I need to move around more during my presentations). A quick look on the Google Play store seemed to turn up a handful of PowerPoint remote apps for Windows or maybe Mac. I also took a look at Can an Android phone control Ubuntu like a remote?, but that seems more for controlling an Ubuntu media center. Is there anything which will work with LibreOffice Impress on Ubuntu? My phone is a Samsung Galaxy Precedent on Straight Talk, running Android 2.2.2 (latest available version from the carrier). It's not rooted, and there's not much memory on it either, so the smaller and simpler the app the better. Also, it has to be free because I currently don't have a means to pay for an app I may/may not like.


  • How to use GPS receiver BU-353

    - by Parimal
    Hi, I have a GPS receiver BU-353 with a USB interface and I want to know how I can use it under Ubuntu. I ran the following command:

        gpsd -n -N -D 2 /dev/ttyUSB0

    I got the output:

        gpsd: launching (Version 2.94)
        gpsd: listening on port gpsd
        gpsd: running with effective group ID 1000
        gpsd: running with effective user ID 1000
        gpsd: opening GPS data source type 3 at '/dev/ttyUSB0'
        gpsd: speed 38400, 8N1
        gpsd: Garmin: garmin_gps Linux USB module not active.
        gpsd: speed 9600, 8O1
        gpsd: speed 38400, 8N1
        gpsd: gpsd_activate(): opened GPS (fd 6)
        gpsd: speed 4800, 8N1
        gpsd: NTPD ntpd_link_activate: 0
        gpsd: /dev/ttyUSB0 identified as type SiRF binary (2.687608 sec @ 4800bps)
        gpsd: detaching 127.0.0.1 (sub 1, fd 8) in detach_client
        gpsd: detaching 127.0.0.1 (sub 1, fd 8) in detach_client

    After this I started tangoGPS, which said no GPS and no gpsd found.
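    One thing worth noting: the log shows gpsd did identify the receiver (SiRF binary at 4800 bps), so the daemon side looks healthy. Before debugging tangoGPS, a quick check with the standard gpsd client tools (from the gpsd-clients package) can confirm fixes are flowing:

        sudo apt-get install gpsd-clients
        cgps -s    # should display live position data if gpsd is serving the device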


  • How do I resolve a not-fully-installed package (python3-setuptools)?

    - by user3737693
    I was trying to install python3-setuptools, and when I run

        sudo apt-get install python3-setuptools

    I get this error:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        0 upgraded, 0 newly installed, 0 to remove and 6 not upgraded.
        1 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Setting up python3-setuptools (0.6.34-0ubuntu1) ...
        Traceback (most recent call last):
          File "/usr/bin/py3compile", line 36, in <module>
            from debpython import files as dpf
          File "/usr/share/python3/debpython/files.py", line 25, in <module>
            from debpython.pydist import PUBLIC_DIR_RE
          File "/usr/share/python3/debpython/pydist.py", line 28, in <module>
            from debpython.tools import memoize
          File "/usr/share/python3/debpython/tools.py", line 25, in <module>
            from datetime import datetime
        ImportError: /usr/bin/datetime.so: undefined symbol: _Py_ZeroStruct
        dpkg: error processing python3-setuptools (--configure):
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         python3-setuptools
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I tried apt-get clean, apt-get autoclean, apt-get remove python3-setuptools, dpkg --remove python3-setuptools, apt-get install -f, dpkg -P --force-remove-reinstreq, dpkg -P --force-all --force-remove-reinstreq and dpkg --purge, but none of them worked. Output of sudo dpkg -P --force-all --force-remove-reinstreq python3-setuptools:

        (Reading database ... 225309 files and directories currently installed.)
        Removing python3-setuptools ...
        Traceback (most recent call last):
          File "/usr/bin/py3clean", line 32, in <module>
            from debpython import files as dpf
          File "/usr/share/python3/debpython/files.py", line 25, in <module>
            from debpython.pydist import PUBLIC_DIR_RE
          File "/usr/share/python3/debpython/pydist.py", line 28, in <module>
            from debpython.tools import memoize
          File "/usr/share/python3/debpython/tools.py", line 25, in <module>
            from datetime import datetime
        ImportError: /usr/bin/datetime.so: undefined symbol: _Py_ZeroStruct
        dpkg: error processing python3-setuptools (--purge):
         subprocess installed pre-removal script returned error exit status 1
        Traceback (most recent call last):
          File "/usr/bin/py3compile", line 36, in <module>
            from debpython import files as dpf
          File "/usr/share/python3/debpython/files.py", line 25, in <module>
            from debpython.pydist import PUBLIC_DIR_RE
          File "/usr/share/python3/debpython/pydist.py", line 28, in <module>
            from debpython.tools import memoize
          File "/usr/share/python3/debpython/tools.py", line 25, in <module>
            from datetime import datetime
        ImportError: /usr/bin/datetime.so: undefined symbol: _Py_ZeroStruct
        dpkg: error while cleaning up:
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         python3-setuptools
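    Every traceback above dies importing datetime and names /usr/bin/datetime.so - a stray compiled module that shadows Python's built-in datetime module whenever scripts in /usr/bin run (Python puts the script's own directory first on sys.path). A plausible fix, assuming no package owns that file, is to move it out of the way and retry:

        dpkg -S /usr/bin/datetime.so          # check whether any package claims it
        sudo mv /usr/bin/datetime.so ~/datetime.so.bak
        sudo apt-get install -f               # re-run the failed configure step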


  • A Small Utility to Delete Files recursively by Date

    - by Rick Strahl
    It's funny, but for me the following seems to be a recurring theme: Every few months or years I end up with a host of files on my server that need pruning selectively and often under program control. Today I realized that my SQL Server logs on my server were really piling up and nearly ran my backup drive out of drive space. So occasionally I need to check on that server drive and clean out files. Now with a bit of work this can be done with PowerShell or even a complicated DOS batch file, but heck, to me it's always easier to just create a small Console application that handles this sort of thing with a full command line parser and a few extra options, plus in the end I end up with code that I can actually modify and add features to, as is invariably the case. No more searching for a script each time :-)

    So for my typical cleanup needs the requirements are:

    Need to recursively delete files
    Need to be able to specify a filespec (ie. *.bak)
    Be able to specify a cut-off date before which to delete files
    And it'd be nice to have an option to send files to the Recycle Bin just in case of operator error :-) (and yes that came in handy as I blew away my entire database backup folder by accident - oops!)

    The end result is a small Console file deletion utility that I popped up on Github: https://github.com/RickStrahl/DeleteFiles. The source code is up there along with the binary file you can just run.

    Creating DeleteFiles

    It's pretty easy to create a simple utility like DeleteFiles of course, so I'm not going to spend any time talking about how it works. You can check it out in the repository or download and compile it. The nice thing about using a full programming language like C# over something like PowerShell or a batch file is that you can make short work of the recursive tree walking that's required to make this work. There's very little code, but there's also a very small, self-contained command line parser in there that might be useful and can be plugged into any project - I've been using it quite a bit for just about any Console application I've been building.

    If you're like me and don't have the patience or the persistence (that funky syntax requires some 'sticking with it' that I simply can't get over) to get into PowerShell coding, having an executable file that I can just copy around or keep in my Utility directory is the only way I'll ever get to reuse this functionality without going on a wild search each time :-) Anyway, hope some of you might find this useful.

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in Windows, CSharp
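    As an aside on the technique itself: the core logic fits in a few lines of PowerShell, which is roughly what a utility like this wraps in a friendlier command line. A sketch, not the actual DeleteFiles code (the path, filespec and age are placeholder values; -WhatIf previews instead of deleting):

        Get-ChildItem -Path D:\SqlBackups -Filter *.bak -Recurse |
            Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-30) } |
            Remove-Item -WhatIf    # drop -WhatIf to actually delete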


  • SQL Developer Data Modeler v3.3 Early Adopter: Collaborative Design via Excel?

    - by thatjeffsmith
    As you may have heard last week, we have a new version of Oracle SQL Developer Data Modeler now available as an Early Adopter release. Version 3.3 has quite a few new features and I’ll be previewing them here. Today’s topic is our new Excel integration. It builds off of last week’s lesson: Search, so you may want to go read that first.

    They say it takes a village to raise a child. I say it takes a team to build a data model. You have your techie folks, your business folks, your in-betweeners, and your database geeks. Who gets to define how customers are represented and stored in your database? That data lives forever, so you better get it right from the beginning, or you’ll be living in a hacker’s paradise for years to come. Lots of good rantings, ravings, and advice on this topic in general on Karen Lopez’s (@datachick) blog.

    But let’s say you are the primary modeler on a project. You dutifully interview the business folks for their requirements. You sit down and start to model and think you’re pretty close. Now you need someone to confirm your assumptions and provide some feedback. Do you send your model over? Take a screenshot and blow it up on a whiteboard? Export to HTML and let them take a magic marker to their monitors? Or maybe you bite the bullet and install your modeling software on their desktops and take the hours or days required to train them up on how to use the tool. Wouldn’t it be nice if they could just mark up their corrections in Excel and let you suck the updates back in? This is what we have started to build in Oracle SQL Developer Data Modeler.

    Let’s say you have a new table called ‘UT_STARTUPS.’ It looks a little something like this:

    [Image: A table in Oracle SQL Developer Data Modeler]

    What I would like to do is have my team or co-worker review how I have defined those columns. Perhaps TIMESTAMP is overkill or maybe the column names themselves aren’t up to snuff. What I am going to do is now search for all the columns in my table, then export that to Excel. So do a search for UT_STARTUPS.

    [Image: Search, filter, then Report]

    With the filter set to ‘Columns,’ if I do a report I’ll be only getting the columns that are resolving to my search term. So as long as my table name is unique in the model, I should get what I’m looking for. Here’s what I see when I click on the Report button:

    [Image: XLS or XLSX, either format is just fine]

    I want to decide how the column data is exported to Excel though, so I’m going to create a report template that I can use going forward. So click the ‘Manage’ button and set up a new template. I’m going to call mine ‘CollaborativeDevelopment.’ The templates allow me to define what properties are included in the reports. Once this is set, I’ll have the XLS file generated, and get to work.

    Now let the Excel junkies do their stuff. Note that not ALL of the report properties are update-able (yes, I made up a new word there) via Excel. We’ll have the full list of properties documented going forward, but in my Excel sheet, note that I can’t change the table name or the data types for the columns. I’m going to update some column names and supply ‘nice’ comments so the database users know what’s what. Here’s my input for the designer/architect/database dude:

    [Image: Be kind, please rew…use comments.]

    Save the file, email it back to your modeler.

    Update the model from Excel - that’s right, it’s a right mouse click from your model in the tree. If everything goes right, you’ll see a nice confirmation message:

    [Image: It’s alive!]

    Another to-do item on tap - making this dialog more informative. We’ll be showing exactly what in your model was updated from Excel. Let’s take another look at the model now.

    [Image: Voila! The updated model]

    Why are we doing this again? The goal is to reduce the number of round-trips between the modeler and the business process owner. One is used to working with Excel - why not allow them to mark up their changes in the tool they already know? This is an early adopter release and I anticipate this feature getting a good bit of tuning up before we release. Why don’t you download 3.3, give it a whirl, and let us know what you think?


  • Java Spotlight Episode 138: Paul Perrone on Life Saving Embedded Java

    - by Roger Brinkley
    Interview with Paul Perrone, founder and CEO of Perrone Robotics, on using Java Embedded to test autonomous vehicle operations for the Insurance Institute for Highway Safety that will save lives. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes.

    Show Notes

    News
    JDK 8 is Feature Complete
    Java SE 7 Update 25 Released
    What should the JCP be doing?
    2013 Duke's Choice Award Nominations
    Another quick update to the Code Signing article on OTN

    Events
    June 24, Austin JUG, Austin, TX
    June 25, Virtual Developer Day - Java, EMEA, 10AM CEST
    Jul 16-19, Uberconf, Denver, USA
    Jul 22-24, JavaOne Shanghai, China
    Jul 29-31, JVM Language Summit, Santa Clara
    Sep 11-12, JavaZone, Oslo, Norway
    Sep 19-20, Strange Loop, St. Louis
    Sep 22-26, JavaOne San Francisco 2013, USA

    Feature Interview
    Paul J. Perrone is founder/CEO of Perrone Robotics. Paul architected the Java-based general-purpose robotics and automation software platform known as “MAX”. Paul has overseen MAX’s application to rapidly field self-driving robotic cars, unmanned air vehicles, factory and road-side automation applications, and a wide range of advanced robots and automaton applications. He fielded a self-driving autonomous robotic dune buggy in the historic 2005 Grand Challenge race across the Mojave desert and a self-driving autonomous car in the 2007 Urban Challenge through a city landscape. His work has been featured in numerous televised and print media including the Discovery Channel, a theatrical documentary, scientific journals, trade magazines, and international press. Since 2008, Paul has also been working as the chief software engineer, CTO, and roboticist automating rock star Neil Young’s LincVolt, a 1959 Lincoln Continental retro-fitted as a fully autonomous extended range electric vehicle. Paul has been an engineer, author of books and articles on Java, frequent speaker on Java, and entrepreneur in the robotics and software space for over 20 years. He is a member of the Java Champions program, recipient of three Duke Awards including a Gold Duke and Lifetime Achievement Award, has showcased Java-based robots at five JavaOne keynotes, and is a frequent JavaOne speaker and show floor participant. He holds a B.S.E.E. from Rutgers University and an M.S.E.E. from the University of Virginia.

    What’s Cool
    Shenandoah: A pauseless GC for OpenJDK


  • SQL SERVER – Find Weekend and Weekdays from Datetime in SQL Server 2012

    - by pinaldave
    Yesterday we had our very first SQL Bangalore User Group meeting and I was asked the following question right after the session: "How do we know if today is a weekend or a weekday using SQL Server functions?" Well, I assume most of us are using SQL Server 2012, so I will suggest the following solution, which uses SQL Server 2012's CHOOSE function:

        SELECT GETDATE() Today,
        DATENAME(dw, GETDATE()) DayofWeek,
        CHOOSE(DATEPART(dw, GETDATE()),
        'WEEKEND', 'Weekday', 'Weekday', 'Weekday',
        'Weekday', 'Weekday', 'WEEKEND') WorkDay
        GO

    You can use the CHOOSE function on a table as well. Here is a quick example of the same:

        USE AdventureWorks2012
        GO
        SELECT A.ModifiedDate,
        DATENAME(dw, A.ModifiedDate) DayofWeek,
        CHOOSE(DATEPART(dw, A.ModifiedDate),
        'WEEKEND', 'Weekday', 'Weekday', 'Weekday',
        'Weekday', 'Weekday', 'WEEKEND') WorkDay
        FROM [Person].[Address] A
        GO

    If you are using an earlier version of SQL Server you can use a CASE statement instead of the CHOOSE function. Please read my earlier article which discusses the CHOOSE function and CASE statements: Logical Function – CHOOSE() – A Quick Introduction.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL DateTime, SQL Function, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
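    For anyone on a pre-2012 version, a CASE equivalent of the query above might look like this (a sketch; as with CHOOSE, the DATEPART(dw, ...) numbering depends on your DATEFIRST setting - here the default, where 1 = Sunday and 7 = Saturday):

        SELECT GETDATE() Today,
        DATENAME(dw, GETDATE()) DayofWeek,
        CASE WHEN DATEPART(dw, GETDATE()) IN (1, 7)
        THEN 'WEEKEND' ELSE 'Weekday' END WorkDay
        GO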


  • Introducing the Oracle MDM Blog - Why All MDM Solutions Aren't Equal

    - by ken.pulverman
    Welcome to the Oracle MDM Blog. Dave Butler, Tony Ouk, and I - Ken Pulverman - will be bringing you news and information from the world of MDM at Oracle. Dave is our resident expert, with more than 30 years of experience in data and information management. Tony has deep expertise in our Exadata product line, which provides a strong hardware synergy with MDM. I come from Siebel Systems, where I helped found the team that built our integration product line and then our Universal Customer Master, which is part of our MDM offering at Oracle.

    I thought I'd hit the ground running with a topic we are going to want to continue to bend your ear about. We had a recent meeting with Ford Goodman, our head of MDM commercial sales in the US, and he was very fired up about an important topic. He's irked that all MDM solutions get painted with the same brush even though they aren't the same at all. There are companies out there trying to represent frameworks and toolkits as out-of-the-box solutions. They give you the pleasure (read: pain) of doing things like developing your own multi-application data model, building your own web services, or creating your own APIs. Huh? What gets sold as flexibility is in reality a barrier to ever going live.

    At Siebel Systems we obsessed over the notion of a customer. Our data model took over 10 years to perfect, as defining a customer is a very complex task indeed. There are divisions, subsidiaries, branches, acquisitions, sites, etc. You'll want to do your homework, but trust me - you aren't going to want to take the time or resources to build these canonical data structures yourself. And what about APIs? Again, it sounds flexible. In reality it's a lot of work.

    Our DNA at Oracle is to reduce the cost of information technology, so we pre-integrate our technology with all of our major applications and pre-build integrations and connectors for all the major systems you work with. This is tedious work that requires detailed knowledge of the interfaces of all the applications involved. It is also version specific, as the interface features and technology are always changing. We have a substantial organization to manage this complexity so you don't have to. Suffice it to say, we'd like to help our customers peel back the rhetoric of companies that fly the MDM flag without a real offering that you can quickly benefit from.

    Please watch this space for more information on this storyline as well as news and information around Oracle MDM.


  • Getting Started With Sinatra

    - by Liam McLennan
    Sinatra is a Ruby DSL for building web applications. It is distinguished from its peers by its minimalism. Here is hello world in Sinatra:

        require 'rubygems'
        require 'sinatra'

        get '/hi' do
          "Hello World!"
        end

    A haml view is rendered by:

        get '/' do
          haml :name_of_your_view
        end

    Haml is also new to me. It is a Ruby-based view engine that uses significant white space to avoid having to close tags. A hello world web page in haml might look like:

        %html
          %head
            %title Hello World
          %body
            %div Hello World

    You see how the structure is communicated using indentation instead of opening and closing tags. It makes views more concise and easier to read.

    Based on my syntax highlighter for Gherkin I have started to build a Sinatra web application that publishes syntax highlighted Gherkin feature files. I have found that there is a need to have features online so that customers can access them, and so that they can be linked to project management tools like Jira, Mingle, trac etc.

    The first thing I want my application to be able to do is display a list of the features that it knows about. This will happen when a user requests the root of the application. Here is my Sinatra handler:

        get '/' do
          feature_service = Finding::FeatureService.new(Finding::FeatureFileFinder.new, Finding::FeatureReader.new)
          @features = feature_service.features(settings.feature_path, settings.feature_extensions)
          haml :index
        end

    The handler and the view are in the same scope, so the @features variable will be available in the view. This is the same way that Rails passes data between actions and views. The view to render the result is:

        %h2 Features
        %ul
          - @features.each do |feature|
            %li
              %a{:href => "/feature/#{feature.name}"}= feature.name

    Clearly this is not a complete web page. I am using a layout to provide the basic html page structure. This view renders an <li> for each feature, with a link to /feature/#{feature.name}. Here is what the page looks like:

    [Image: the rendered list of features]

    When the user clicks on one of the links I want to display the contents of that feature file. The required handler is:

        get '/feature/:feature' do
          @feature_name = params[:feature]
          feature_service = Finding::FeatureService.new(Finding::FeatureFileFinder.new, Finding::FeatureReader.new)
          # TODO replace with feature_service.feature(name)
          @feature = feature_service.features(settings.feature_path, settings.feature_extensions).find do |feature|
            feature.name == @feature_name
          end
          haml :feature
        end

    and the view:

        %h2= @feature.name
        %pre{:class => "brush: gherkin"}= @feature.description
        %div= partial :_back_to_index
        %script{:type => "text/javascript", :src => "/scripts/shCore.js"}
        %script{:type => "text/javascript", :src => "/scripts/shBrushGherkin.js"}
        %script{:type => "text/javascript" }
          SyntaxHighlighter.all();

    Now when I click on the Search link I get a nicely formatted feature file:

    [Image: a syntax-highlighted feature file]

    If you would like to see the full source it is available on bitbucket.


  • Getting Started With nServiceBus on VAN Mar 31

    - by van
    Topic: nServiceBus is a mature and powerful open source framework that enables you to design robust, scalable, message-based, service-oriented architectures. The latest improvements in the configuration API enable developers to quickly get started and build a simple working system that uses messaging infrastructure. The goal of this session is to give a jump start with the framework, introduce basic concepts such as message handlers, Sagas, Pub/Sub and the Generic Host, and also create a working demo application that uses publish/subscribe messaging. The content of the session is addressed to developers who are interested in learning how to get started using nServiceBus in order to design and build distributed systems.

    Bio: Bernard Kowalski is currently a Software Developer at Microdesk, one of Autodesk's leading partners in providing a variety of Geospatial and Computer-Aided Design solutions. Bernard has experience developing .NET framework-based applications utilizing Windows Forms, Windows Services, ASP.NET MVC, and Web services. In a recent project, Bernard architected and implemented a distributed system based on SOA principles using an open source implementation of an Enterprise Service Bus. Bernard develops software with Agile patterns and practices using Domain Driven Design combined with TDD (Test Driven Development). He is familiar with all of the following APIs: Autodesk Vault/Product Stream API, AutoCAD ActiveX/VBA/.NET API, AutoCAD Mechanical API, Autodesk Inventor API, Autodesk MapGuide Enterprise. Prior to joining Microdesk, Bernard worked as a researcher and teacher at the University of Science and Technology in Krakow, Poland, where he was awarded a PhD in Computer Methods in Materials Science. He also participated in research projects where he developed applications for analysis of hot compression test results using advanced optimization techniques, and developed Finite Element Method-based programs for thermal and stress analysis using C++ and FORTRAN. Bernard is a member of the Domain Driven Design and ALT.NET user groups in NYC.

    Virtual ALT.NET (VAN) is the online gathering place of the ALT.NET community. Through conversations, presentations, pair programming and dojos, we strive to improve, explore, and challenge the way we create software. Using net conferencing technology such as Skype and LiveMeeting, we hold regular meetings, open to anyone, usually taking the form of a presentation or an Open Space Technology-style conversation. Please see the Calendar (http://www.virtualaltnet.com/Home/Calendar) to find a VAN group that meets at a time convenient to you, and feel welcome to join a meeting. Past sessions can be found on the Recording page. To stay informed about VAN activities, you can subscribe to the Virtual ALT.NET Google Group and follow the Virtual ALT.NET blog.

    Times below are Central Standard Time
    Start Time: Wed, Mar 31, 2010 8:00 PM UTC/GMT -5 hours
    End Time: Wed, Mar 31, 2010 10:00 PM UTC/GMT -5 hours
    Attendee URL: http://www.virtualaltnet.com/van

    Zach Young
    http://www.virtualaltnet.com


  • Oracle Extends Life Sciences Edition in New Release

    - by charles.knapp
    By Chris Kanaracus, IDG News Service Oracle (ORCL) announced the 17th version of its on-demand CRM (customer relationship management) application Wednesday and made a fresh push into pharmaceutical sales with a Life Sciences edition of the software. New features in CRM on Demand Release 17 include tools for managing sales pipelines and performing forecasts of future business; a redesigned user interface; and added language support. But one CRM industry observer flagged the Life Sciences product as a particular point of interest. Read the full article here.


  • I'm a happy camper

    - by Paul Nielsen
    I’m satisfied with SQL Server 2008. It meets my needs and I can build whatever I want in the database. There’s no feature it lacks that is blocking my development. SQL Server 2008 is what I see when I close my eyes and dream. Maybe I’m just pleased with my current projects, maybe I’m being dense, maybe I lack imagination - tell me if I’m being stupid - but I feel no compelling need for an upgrade. If Microsoft didn’t ship the new version for 3 or 5 more years, I’d be OK with that. I’m sure there...(read more)

