Search Results

Search found 14951 results on 599 pages for 'connect by'.


  • iPad Impressions

    - by Aaron Lazenby
    So, I spent some quality time with my new iPad on Saturday. Here are things I like/don't like: -- Don't like that it has to sync with iTunes before you use it: I was traveling and left my laptop at home thinking I'd use this iPad thing instead. But the first thing it asked me to do is connect it to a laptop. Ugh. Had to borrow my mother-in-law's MacBook Pro just to get the iPad rolling. -- Like that magazines and newspapers are forever changed: And I think for the better...it's why I bought this thing in the first place. I spent significant time with The New York Times, The Wall Street Journal, Time Magazine and Popular Science on the iPad. Sliding stories around, jumping from section to section, enlarging images = all excellent experiences. Actually prefer iPad magazine to print, which will require a major shift in editorial strategy, summed up by Popular Science's Mark Jannot in his editor's note "What defines a magazine? Curated expertise--not paper." -- Don't like the screwy human factors: I actually enjoy the virtual keyboard (although I think I'm in the minority), but you have to hunch over to look down at what you're typing. Bad technology ergonomics have already jacked my body in various ways. The iPad just introduced a new one. -- Like the multitouch: In fact, it's awesome. Hands down. Probably will have the most lasting impact on the personal computing industry as a whole. -- Don't like that it's heavy: If you plan to read in bed, you'd better double up on the creatine and curls. Holding this thing up on your own gets pretty uncomfortable. -- Like the Netflix app: I wanted to watch "The Big Lebowski," so I did. That is all. -- Don't like that people feel 3G is necessary: For $30 a month? Please. I'm already accustomed to limiting my laptop internet use to readily available free wi-fi. Why do I expect anything different with the iPad? Most anyplace I have time to sit and read/use a computer (cafe, airport, your house, library, etc.) has free wi-fi. I can live without web surfing in your car. That's what the iPhone is for. -- Don't like that not everyone was ready on day one: I'm looking at you, Facebook. No iPad app for launch? Lame. iPhone apps scaled up to work on the iPad look grainy and cheap. Not a quality befitting this beautiful $700 piece of glass. Verdict: I'm bringing it to COLLABORATE 08 and seeing if I can go the whole week using only the iPad. If I can trade this thing for my laptop, I know it's a winner. For now, I'm enjoying Popular Science.

    Read the article

  • Boost Netbook Speed with an SD Card & ReadyBoost

    - by Matthew Guay
    Looking for a way to increase the performance of your netbook?  Here’s how you can use a standard SD memory card or a USB flash drive to boost performance with ReadyBoost. Most netbooks ship with 1GB of RAM, and many older netbooks shipped with even less.  Even if you want to add more RAM, often they can only be upgraded to a max of 2GB.  With ReadyBoost in Windows 7, it’s easy to boost your system’s performance with flash memory.  If your netbook has an SD card slot, you can insert a memory card into it and just leave it there to always boost your netbook’s memory; otherwise, you can use a standard USB flash drive the same way. Also, you can use ReadyBoost on any desktop or laptop; ones with limited memory will see the most performance increase from using it. Please Note:  ReadyBoost requires at least 256MB of free space on your flash drive, and also requires minimum read/write speeds.  Most modern memory cards or flash drives meet these requirements, but be aware that an old card may not work with it. Using ReadyBoost Insert an SD card into your card reader, or connect a USB flash drive to a USB port on your computer.  Windows will automatically see if your flash memory is ReadyBoost capable, and if so, you can directly choose to speed up your computer with ReadyBoost. The ReadyBoost settings dialog will open when you select this.  Choose “Use this device” and choose how much space you want ReadyBoost to use. Click OK, and Windows will set up ReadyBoost and start using it to speed up your computer.  It will automatically use ReadyBoost whenever the card is connected to the computer. When you view your SD card or flash drive in Explorer, you will notice a ReadyBoost file of the size you chose before.  This will be deleted when you eject your card or flash drive. If you need to remove your drive to use it elsewhere, simply eject it as normal. Windows will inform you that the drive is currently being used.  Make sure you have closed any programs or files you had open from the drive, and then press Continue to stop ReadyBoost and eject your drive. If you remove the drive without ejecting it, the ReadyBoost file may still remain on the drive.  You can delete this to save space on the drive, and the cache will be recreated when you use ReadyBoost next time. Conclusion Although ReadyBoost may not make your netbook feel like a Core i7 laptop with 6GB of RAM, it will still help performance and make multitasking even easier.  Also, if you have, say, a memory stick and a flash drive, you can use both of them with ReadyBoost for the maximum benefit.  We have even noticed better battery life when multitasking with ReadyBoost, as it lets you use your hard drive less.  SD cards and thumb drives are relatively cheap today, and many of us have several already, so this is a great way to improve netbook performance cheaply.

    Read the article

  • SQL SERVER – Sends backups to a Network Folder, FTP Server, Dropbox, Google Drive or Amazon S3

    - by pinaldave
    Let me tell you about one of the most useful SQL tools that every DBA should use – it is SQLBackupAndFTP. I have been using this tool since 2009 – and it is the first program I install on a SQL Server. Download the free version, spend a minute on configuration, and your daily backups are safe in the cloud. In summary, SQLBackupAndFTP: creates SQL Server database and file backups on schedule; compresses and encrypts the backups; sends backups to a network folder, FTP server, Dropbox, Google Drive or Amazon S3; and sends email notifications of a job’s success or failure. SQLBackupAndFTP comes in Free and Paid versions (starting from $29) – see version comparison. The free version is fully functional for unlimited ad hoc backups or for scheduled backups of up to two databases – it will be sufficient for many small customers. What has impressed me from the beginning is that I understood how it works and was able to configure the job from a single form (see Image 1 – Main form above): connect to your SQL Server and select the databases to be backed up; click “Add backup destination” to configure where backups should go (network folder, FTP server, Dropbox, Google Drive or Amazon S3); enter your email to receive email confirmations; set the time to start daily full backups (or go to Settings if you need Differential or Transaction Log backups on a flexible schedule); and press the “Run Now” button to test. You can get to this form if you click the “Settings” button in the “Schedule” section. Select what types of backups you want and how often you want to run them, and you will see the scheduled backups in the “Estimated backup plan” list. A detailed tutorial is available on the developer’s website. Along with SQLBackupAndFTP, the setup gives you the option to install “One-Click SQL Restore” (you can install it stand-alone too) – a basic tool for restoring just Full backups. Basic as it is, you can drag and drop onto it the zip file created by SQLBackupAndFTP; it unzips the BAK file if necessary, connects to the SQL Server on start, selects the right database, and is smart enough to restart the server to drop open connections if necessary – very handy for developers who need to restore databases often. You may ask why this tool is better than the maintenance tasks available in SQL Server. While maintenance tasks are easy to set up, SQLBackupAndFTP is still way easier and integrates compression, encryption, FTP, cloud storage and email into one solution, which makes it superior to maintenance tasks in every aspect. On the flip side, SQLBackupAndFTP is not the fanciest tool to manage backups or check their health. It only works reliably on local SQL Server instances. In other words, it has to be installed on the SQL Server itself. For remote servers it uses scripting, which is less reliable. This limitation is actually inherent in SQL Server itself, as the BACKUP DATABASE command creates the backup not on the client, but on the server itself. This tool is compatible with almost all the known SQL Server versions. It works with SQL Server 2008 (all versions) and many of the previous versions. It is especially useful for SQL Server Express 2005 and SQL Server Express 2008, as they lack built-in tools for backup. I strongly recommend this tool to all the DBAs. They must absolutely try it, as it is free and does exactly what it promises. You can download your free copy of the tool from here. Please share your experience about using this tool. I am eager to receive your feedback regarding this article.
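    To make that server-side point concrete, here is a minimal T-SQL sketch of the kind of backup statement involved; the database name and disk path are hypothetical, and the path must be writable by the SQL Server service on the server itself, not on the client machine:

        -- BACKUP DATABASE always writes the file on the server, never on the client
        BACKUP DATABASE [YourDatabase]
        TO DISK = N'D:\Backups\YourDatabase.bak'
        WITH INIT,       -- overwrite any existing backup sets in the file
             CHECKSUM,   -- verify page checksums while writing the backup
             STATS = 10; -- report progress every 10 percent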
Reference: Pinal Dave (http://blog.SQLAuthority.com)   Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, SQLServer, T SQL, Technology

    Read the article

  • Software index broken

    - by Arvind Gangwar
    While installing the MTS MBlaze crossplatformui.deb package for the MTS data connection, the installation completed only partially and showed an error, so I tried to uninstall "crossplatformui", but every time it showed the following error: installArchives() failed: perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 205769 files and directories currently installed.) Removing crossplatformui ... ztemtvcdromd: no process found dpkg: error processing crossplatformui (--remove): subprocess installed post-removal script returned error exit status 1 No apport report written because MaxReports is reached already Errors were encountered while processing: crossplatformui Setting up firmware-b43-installer (4.150.10.5-5) ... --2012-06-01 14:11:21-- http://mirror2.openwrt.org/sources/broadcom-wl-4.150.10.5.tar.bz2 Resolving mirror2.openwrt.org... 46.4.11.11 Connecting to mirror2.openwrt.org|46.4.11.11|:80... failed: Connection refused. dpkg: error processing firmware-b43-installer (--configure): subprocess installed post-installation script returned error exit status 4

    Read the article

  • From NaN to Infinity...and Beyond!

    - by Tony Davis
    It is hard to believe that it was once possible to corrupt a SQL Server database by storing perfectly normal data values into a table; but it is true. In SQL Server 2000 and before, one could inadvertently load invalid data values into certain data types via RPC calls or bulk insert methods rather than DML. In the particular case of the FLOAT data type, this meant that common 'special values' for this type, namely NaN (not-a-number) and +/- infinity, could be quite happily plugged into the database from an application and stored as 'out-of-range' values. This was like a time-bomb. When one then tried to query this data, the values were unsupported and so data pages containing them were flagged as being corrupt. Any query that needed to read a column containing the special value could fail or return unpredictable results. Microsoft even had to issue a hotfix to deal with failures in the automatic recovery process, caused by the presence of these NaN values, which rendered the whole database inaccessible! This problem is history for those of us on more current versions of SQL Server, but its ghost still haunts us. Recently, for example, a developer on Red Gate’s SQL Response team reported a strange problem when attempting to load historical monitoring data into a SQL Server 2005 database via the C# ADO.NET provider. The ratios used in some of their reporting calculations occasionally threw out NaN or infinity values, and the subsequent attempts to load these values resulted in a nasty error. It turns out to be a different manifestation of the same problem. SQL Server 2005 still does not fully support the IEEE 754 standard for floating point numbers, in that the FLOAT data type still cannot handle NaN or infinity values. Instead, they just added validation checks that prevent the 'invalid' values from being loaded in the first place. For people migrating from SQL Server 2000 databases that contained out-of-range FLOAT (or DATETIME etc.) data to SQL Server 2005, Microsoft added a DATA_PURITY clause to the latter's version of the DBCC CHECKDB (or CHECKTABLE) command. When enabled, this will seek out the corrupt data, but won’t fix it. You have to do this yourself in what can often be a slow, painful manual process. Our development team, after a quizzical shrug of the shoulders, simply decided to represent NaN and infinity values as NULL, and move on, accepting the minor inconvenience of not being able to tell them apart. However, what of scientific, engineering and other applications that really would like the luxury of being able to both store and access these perfectly-reasonable floating point data values? The sticking point seems to be the stipulation in the IEEE 754 standard that, when NaN is compared to any other value including itself, the answer is "unequal" (i.e. FALSE). This is clearly different from normal number comparisons and has repercussions for such things as indexing operations. Even so, this hardly applies to infinity values, which are single definite values. In fact, there is some encouraging talk in the Connect note on this issue that they might be supported 'in the SQL Server 2008 timeframe'. It didn't happen; SQL Server 2008 doesn't support NaN or infinity values, though one could be forgiven for thinking otherwise, based on the MSDN documentation for the FLOAT type, which states that "The behavior of float and real follows the IEEE 754 specification on approximate numeric data types". 
However, the truth is revealed in the XPath documentation, which states that "…float (53) is not exactly IEEE 754. For example, neither NaN (Not-a-Number) nor infinity is used…". Is it really so hard to fix this problem the right way, and properly support in SQL Server the IEEE 754 standard for the floating point data type, NaNs, infinities and all? Oracle seems to have managed it quite nicely with its BINARY_FLOAT and BINARY_DOUBLE types, so it is technically possible. We have an enterprise-class database that is marketed as being part of an 'integrated' Windows platform. Absurdly, we have .NET and XPath libraries that fully support the standard for floating point numbers, and we can't even properly store these values, let alone query them, in the SQL Server database! Cheers, Tony.
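    As a practical footnote to the DATA_PURITY discussion above, here is a minimal sketch of running that check against a database upgraded from SQL Server 2000; the database and table names are hypothetical (databases created on SQL Server 2005 or later already include these column-value checks by default):

        -- Look for out-of-range FLOAT (or DATETIME) values left over from SQL Server 2000
        DBCC CHECKDB (N'YourMigratedDatabase') WITH DATA_PURITY;

        -- Or narrow the check to a single suspect table in the current database
        DBCC CHECKTABLE (N'dbo.YourSuspectTable') WITH DATA_PURITY;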

    Read the article

  • help: cannot make ubuntu 64-bit v12.04 install work

    - by honestann
    I decided it was time to update my ubuntu (single boot) computer from 64-bit v10.04 to 64-bit v12.04. Unfortunately, for some reason (or reasons) I just can't make it work. Note that I am attempting a fresh install of 64-bit v12.04 onto a new 3TB hard disk, not an upgrade of the 1TB hard disk that has contained my 64-bit v10.04 installation. To perform the attempted install of v12.04 I unplug the SATA cable from the 1TB drive and plug it into the 3TB drive (to avoid risking damage to my working v10.04 installation). I downloaded the ubuntu 64-bit v12.04 install DVD ISO file (~1.6 GB) from the ubuntu releases webpage and burned it onto a DVD. I have downloaded the DVD ISO file 3 times and burned 3 of these installation DVDs (twice with v10.04 and once with my winxp64 system), but none of them work. I run the "check disk" on the DVDs at the beginning of the installation process to assure the DVD is valid. I also tried to install on two older 250GB seagate drives in the same computer. During every attempt I plug the same SATA cable (sda) into only one disk drive (the 3TB or one of the 250GB drives) and leave the other disk drives unconnected (for simplicity). Installation takes about 30 minutes on the 250GB drives, and about 60 minutes on the 3TB drive - not sure why. When I install on the 250GB drives, the install process finishes, the computer reboots (after the install DVD is removed), but I get a grub error 15. It is my understanding that 64-bit ubuntu (and 64-bit linux in general) has no problem with 3TB disk drives. In the BIOS I have tried having EFI set to "enabled" and "auto" with no apparent difference (no success). I have tried partitioning the drive in a few ways to see if that makes a difference, but so far it has not mattered. Typically I manually create partitions something like this: 8GB swap 8GB /boot ext4 3TB / ext4 But I've also tried the following, just in case it matters: 100MB boot efi 8GB swap 8GB /boot ext4 3TB / ext4 Note: In the partition dialog I specify bootup on the same drive I am partitioning and installing ubuntu v12.04 onto. It is a VERY DANGEROUS FACT that the default for this always comes up with the wrong drive (some other drive, generally the external drive). Unless I'm stupid or misunderstanding something, this is very wrong and very dangerous default behavior. Note: If I connect the SATA cable to the 1TB drive that has been my ubuntu 64-bit v10.04 system drive for the past 2 years, it boots up and runs fine. I guess there must be a log file somewhere, and maybe it gives some hints as to what the problem is. I should be able to boot off the 1TB drive with the 3TB drive connected as a secondary (non-boot) drive and get the log file, assuming there is one and someone tells me the name (and where to find it if the name is very generic). After installation on the 3TB drive completes and the system reboots, the following prints out on a black screen: Loading Operating System ... Boot from CD/DVD : Boot from CD/DVD : error: unknown filesystem grub rescue Note: I have two DVD burners in the system, hence the duplicate line above. The same install and reboot on the 250GB drives generates "grub error 15". Sigh. Any ideas? ========== motherboard == gigabyte 990FXA-UD7 CPU == AMD FX-8150 8-core bulldozer @ 3.6 GHz RAM == 8GB of DDR3 in 2 sticks (matched pair) HDD == seagate 3TB SATA3 @ 7200 rpm (new install 64-bit v12.04) HDD == seagate 1TB SATA3 @ 7200 rpm (current install 64-bit v10.04) GPU == nvidia GTX-285 ??? 
== no overclocking or other funky business USB == external seagate 2TB HDD for making backups DVD == one bluray burner (SATA) DVD == one DVD burner (SATA) The current ubuntu 64-bit v10.04 system boots and runs fine on a seagate 1TB.

    Read the article

  • Odd MVC 4 Beta Razor Designer Issue

    - by Rick Strahl
    This post is a small cry for help, along with an explanation of a problem that is hard to describe on Twitter or even in a Connect bug, written in the hope that somebody has seen this before and has ideas on what might cause it. Lots of helpful people had comments on Twitter for me, but they all assumed that the code doesn't run, which is not the case - it's a designer issue. A few days ago I started getting some odd problems in my MVC 4 designer for an app I've been working on for the past 2 weeks. Basically the MVC 4 Razor designer keeps popping up errors about the call signatures of various Html Helper methods being incorrect. It also complains about the ViewBag object not supporting dynamic, requesting that I load assemblies into the project. Here's what the designer errors look like: You can see the red error underlines under the ViewBag and an Html Helper I plopped in at the top to demonstrate the behavior. Basically any HtmlHelper I'm accessing is showing the same errors. Note that the code *runs just fine* - it's just the designer that is complaining with errors. What's odd about this is that *I think* this started only a few days ago and nothing consequential that I can think of has happened to the project or overall installations. These errors aren't critical since the code runs, but they are pretty annoying, especially if you're building and have .cshtml files open in Visual Studio, mixing these fake errors with real compiler errors. What I've checked Looking at the errors it indeed looks like certain references are missing. I can't make sense of the Html Helpers error, but certainly the ViewBag dynamic error looks like the System.Core or Microsoft.CSharp assemblies are missing. Rest assured they are there and the code DOES run just fine at runtime. This is a designer issue only. I went ahead and checked the namespaces that MVC has access to in Razor, which live in the Views folder's web.config file: /Views/web.config <system.web.webPages.razor> <host factoryType="System.Web.Mvc.MvcWebRazorHostFactory, System.Web.Mvc, <split for layout> Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <pages pageBaseType="System.Web.Mvc.WebViewPage"> <namespaces> <add namespace="System.Web.Mvc" /> <add namespace="System.Web.Mvc.Ajax" /> <add namespace="System.Web.Mvc.Html" /> <add namespace="System.Web.Routing" /> <add namespace="System.Linq" /> <add namespace="System.Linq.Expressions" /> <add namespace="ClassifiedsBusiness" /> <add namespace="ClassifiedsWeb"/> <add namespace="Westwind.Utilities" /> <add namespace="Westwind.Web" /> <add namespace="Westwind.Web.Mvc" /> </namespaces> </pages> </system.web.webPages.razor> For good measure I added System.Linq and System.Linq.Expressions, on which some of the Html.xxxxFor() methods rely, but no luck. So, has anybody seen this before? Any ideas on what might be causing these issues only at design time, when the final compiled code runs just fine? © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Razor, MVC.

    Read the article

  • Java Spotlight Episode 103: 2012 Duke Choice Award Winners

    - by Roger Brinkley
    Our annual interview with the 2012 Duke Choice Award Winners, recorded live at JavaOne 2012. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes. Show Notes – Events: Oct 13, Devoxx 4 Kids Nederlands; Oct 15-17, JAX London; Oct 20, Devoxx 4 Kids Français; Oct 22-23, Freescale Technology Forum - Japan, Tokyo; Oct 30-Nov 1, Arm TechCon, Santa Clara; Oct 31, JFall, Netherlands; Nov 2-3, JMagreb, Morocco; Nov 13-17, Devoxx, Belgium. Feature Interview: Duke Choice Award Winners 2012 - Show Presentation. London Java Community: The second user group receiving a Duke’s Choice Award this year, the London Java Community (LJC) and its users have been active in the OpenJDK, the Java Community Process (JCP) and other efforts within the global Java community. Student Nokia Developer Group: This year’s student winner, Ram Kashyap, is the founder and president of the Nokia Student Network, and was profiled in the “The New Java Developers” feature in the March/April 2012 issue of Java Magazine. Since then, Ram has maintained a hectic pace, graduating from the People’s Education Society Institute of Technology in Bangalore, India, while working on a Java mobile startup and training students on Java ME. Jelastic, Inc.: Moving existing Java applications to the cloud can be a daunting task, but startup Jelastic, Inc. offers the first all-Java platform-as-a-service (PaaS) that enables existing Java applications to be deployed in the cloud without code changes or lock-in. NATO: The first-ever Community Choice Award goes to the MASE Integrated Console Environment (MICE) in use at NATO. Built in Java on the NetBeans platform, MICE provides a high-performance visualization environment for conducting air defense and battle-space operations. Duchess: Rather than focus on a specific geographic area like most Java User Groups (JUGs), Duchess fosters the participation of women in the Java community worldwide. The group has more than 500 members in 60 countries, and provides a platform through which women can connect with each other and get involved in all aspects of the Java community. AgroSense Project: Improving farming methods to feed a hungry world is the goal of AgroSense, an open source farm information management system built in Java and the NetBeans platform. AgroSense enables farmers, agribusinesses, suppliers and others to develop modular applications that will easily exchange information through a common underlying NetBeans framework. Apache Software Foundation Hadoop Project: The Apache Software Foundation’s Hadoop project, written in Java, provides a framework for distributed processing of big data sets across clusters of computers, ranging from a few servers to thousands of machines. This harnessing of large data pools allows organizations to better understand and improve their business. Parleys.com: E-learning specialist Parleys.com, based in Brussels, Belgium, uses Java technologies to bring online classes and full IT conferences to desktops, laptops, tablets and mobile devices. Parleys.com has hosted more than 1,700 conferences—including Devoxx and JavaOne—for more than 800,000 unique visitors. 
Winners not presenting at the JavaOne 2012 Duke Choice Awards BOF: Liquid Robotics (Robotics): Liquid Robotics is an ocean data services provider whose Wave Glider technology collects information from the world’s oceans for application in government, science and commercial applications. The organization features the “father of Java” James Gosling as its chief software architect. United Nations High Commissioner for Refugees: The United Nations High Commissioner for Refugees (UNHCR) is on the front lines of crises around the world, from civil wars to natural disasters. To help facilitate its mission of humanitarian relief, the UNHCR has developed a light-client Java application on the NetBeans platform. The Level One registration tool enables the UNHCR to collect information on the number of refugees and their water, food, housing, health, and other needs in the field, and combines that with geocoding information from various sources. This enables the UNHCR to deliver the appropriate kind and amount of assistance where it is needed.

    Read the article

  • SQL SERVER – Discard Results After Query Execution – SSMS

    - by pinaldave
    The first thing I do every day is turn on the computer. Today I woke up and as soon as I turned on the computer I saw a chat message from a friend. He was a bit confused and wanted me to help him. As usual, I am keeping the relevant conversation in focus and documenting it here as a chat. Let us call him Ajit. Ajit: Pinal, every time I run a query there is no result displayed in SSMS, but when I run the query in my application it works and returns an appropriate result. Pinal: Have you tried with different parameters? Ajit: Same thing. However, it works from another computer when I connect to the same server with the same query parameters. Pinal: What? That is new, and I believe it is something to do with SSMS and not with the server. Send me a screenshot please. Ajit: I believe so, let me send you a screenshot. Pinal: (looking at the screenshot) Oh man, there is no results tab at all. Ajit: That is what the problem is. It does not have the tab which displays the result. This works just fine from another computer. Pinal: Have you referred to Nakul’s blog post – SSMS – Query result options – Discard result after query executes – which talks about a setting that can discard the query results after execution? (After a while) Ajit: It seems that on the computer where I am running the query, my SSMS has the option enabled related to discarding results. I fixed it by following Nakul’s blog post. Pinal: Great! Quite often I get the question of what the importance of this feature is. Let us first see how to turn this feature on or off in SQL Server Management Studio 2012. In SSMS 2012 go to Tools >> Options >> Query Results >> SQL Server >> Results to Grid >> Discard Results After Query Execution. When enabled, this option will discard results after the execution. The advantage of enabling the option is that it will improve performance by using less memory. However, the real question is why someone would enable or disable the option. What are the cases when someone wants to run a query but does not care about the result? At first glance, it does not make sense at all to run a query and not care about the result. As a matter of fact, though, I can see quite a few reasons for using this option. I often enable this option when I am doing a performance tuning exercise. During a performance tuning exercise, when I am working with execution plans and do not need the results to verify every time, or when I am tuning indexes and checking their effect on the execution plan, I do not need the results. In these kinds of situations I keep this option on and discard the results. It always helps me big time, as in most performance tuning exercises I am dealing with a huge amount of data, and dealing with this data can be expensive. Nakul has done the experiment here already, but I am going to repeat the same again using the AdventureWorks database. Run the following T-SQL script with and without enabling the option to discard the results: USE AdventureWorks2012 GO SELECT * FROM Sales.SalesOrderDetail GO 10 [Screenshot: After enabling Discard Results After Query Execution] [Screenshot: After disabling Discard Results After Query Execution] Well, this is indeed a good option when someone is debugging the execution plan or does not want the result to be displayed. Please note that this option does not reduce IO or CPU usage for SQL Server. It just discards the results after execution, and it is a good help for debugging on the development server. 
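    For easy copying, here is the test script mentioned above as it would be laid out in an SSMS query window; it assumes the AdventureWorks2012 sample database is installed:

        USE AdventureWorks2012;
        GO
        -- Return the whole table ten times in a row; with Discard Results After Query
        -- Execution enabled, SSMS throws the rows away instead of rendering them in the grid
        SELECT *
        FROM Sales.SalesOrderDetail;
        GO 10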
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Installing SQL Server Data Tools and SSRS

    - by Pinal Dave
    This example is from Beginning SSRS by Kathi Kellenberger. Supporting files are available with a free download from the www.Joes2Pros.com web site. If you have installed SQL Server but are missing the Data Tools or Reporting Services: Double-click the SQL Server 2012 installation media. Click the Installation link on the left to view the Installation options. Click the top link, New SQL Server stand-alone installation or add features to an existing installation. Follow the SQL Server Setup wizard until you get to the Installation Type screen. At that screen, select Add features to an existing instance of SQL Server 2012. Click Next to move to the Feature Selection page. Select Reporting Services – Native and SQL Server Data Tools. If the Management Tools have not been installed, go ahead and choose them as well. Continue through the wizard and reboot the computer at the end of the installation if instructed to do so. Configure Reporting Services If you installed Reporting Services during the installation of the SQL Server instance, SSRS will be configured automatically for you. If you install SSRS later, then you will have to go back and configure it as a subsequent step. Click Start > All Programs > Microsoft SQL Server 2012 > Configuration Tools > Reporting Services Configuration Manager > Connect on the Reporting Services Configuration Connection dialog box. On the left-hand side of the Reporting Services Configuration Manager, click Database. Click the Change Database button on the right side of the screen. Select Create a new report server database and click Next. Click through the rest of the wizard, accepting the defaults. This wizard creates two databases: ReportServer, used to store report definitions and security, and ReportServerTempDB, which is used as scratch space when preparing reports for user requests. Now click Web Service URL on the left-hand side of the Reporting Services Configuration Manager. Click the Apply button to accept the defaults. If the Apply button has been grayed out, move on to the next step. This step sets up the SSRS web service. The web service is the program that runs in the background that communicates between the web page, which you will set up next, and the databases. The final configuration step is to select the Report Manager URL link on the left. Accept the default settings and click Apply. If the Apply button was already grayed out, this means SSRS was already configured. This step sets up the Report Manager web site where you will publish reports. You may be wondering if you must also install a web server on your computer. SQL Server does not require that Internet Information Services (IIS), the Microsoft web server, be installed to run Report Manager. Click Exit to dismiss the Reporting Services Configuration Manager dialog box. Tomorrow’s Post Tomorrow’s blog post will show how to create your first report using the Report Wizard. If you want to learn SSRS in simple words – I strongly recommend you to get the Beginning SSRS book from Joes 2 Pros. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Reporting Services, SSRS
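    As an optional follow-up to the database step above, here is a hedged T-SQL sketch for confirming from SSMS that the configuration wizard created its two databases; the default names produced by the wizard are assumed:

        -- Lists the Reporting Services databases if they were created with the default names
        SELECT name, create_date
        FROM sys.databases
        WHERE name IN (N'ReportServer', N'ReportServerTempDB');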

    Read the article

  • Migrating SQL Server Compact Edition (SQL CE) database to SQL Server using Web Matrix

    - by Harish Ranganathan
    One of the things keeping us busy is the Web Camps we are delivering across 5 cities.  If you are a reader of this blog and also attended one of these web camps, there is a good chance that you have seen me, since I was there in all the places so far.  The topics that we cover include Visual Studio 2010 SP1, SQL CE, ASP.NET MVC & HTML5.  Whenever I talk about SQL CE, the immediate response is that people are wowed that Microsoft has shipped a FREE compact edition database, which is an embedded database that can be x-copy deployed.  You might think, well, didn’t Microsoft ship SQL Express, which is FREE?  The difference is that SQL Express runs as a service on the machine (if you open SQL Server Configuration Manager, you can notice that SQL Express is running as a service along with your SQL Server engine, if you have installed it).  This means that, even if you are willing to use SQL Express when you deploy your application, it needs to be installed on the production machine (hosting provider) and it needs to run as a service.  Many hosters don’t allow such services to run on their space. SQL CE comes as an x-copy deployable database with just a few DLLs required to run it on the machine, and they don’t even need to be installed in the GAC on the production machine.  In fact, if you have Visual Studio 2010 SP1 installed, you can use the “Add Deployable Dependencies” option in Project Properties and it would detect that SQL CE is something you would probably want to add as a deployable dependency for your project.  With that, it bundles the required DLLs as part of the “_bin_deployableAssemblies” folder.  So your project can be x-copy deployed and just works fine. However, SQL CE has a limit of 4GB of storage space.  Real world applications often require more than just 4GB of data storage, and it often turns out that people would like to use SQL CE for the development/ramp-up stages but migrate to full-fledged SQL Server after a while.  So, it’s only natural that the question arises: “How do I move my SQL CE database to SQL Server?”  And honestly, it isn’t supported in a very straightforward way.  I was talking to Ambrish Mishra (PM in the SQL CE team, Hyderabad), since I got this question in almost all the places where we talked about SQL CE.  He was kind enough to demonstrate how this can be accomplished using Web Matrix.  Open Web Matrix (Web Matrix can be installed for free from www.microsoft.com/web) and click on “Site from Template”. Click on the “Bakery” template (since by default it uses a SQL CE database and has all the required sample data) and click “Ok”. In the project, you can navigate to the Database tab and will be able to find that the Bakery site uses a SQL CE database, “bakery.sdf”. Select “bakery.sdf” and you will be able to see the “Migrate” button on the top right. Once you click on the “Migrate” button, you will notice that the popup wizard opens up and by default is configured for SQL Express.  You can edit the same to point to your local SQL Server instance, or a remote server. Upon filling in the Server Name, Username and Password, when you click “Ok”, a couple of things happen.  1. The database is migrated to SQL Server (local or remote – subject to permissions on the remote server).   You can open up SQL Server Management Studio and connect to the server to verify that the “bakery” database exists under the “Databases” node. 2. You can also notice that in Web Matrix, when you navigate to the “Files” tab and open up the web.config file, the connection string now points to the SQL Server instance (yes, the Migrate button was smart enough to make this change too). And there it is, your SQL Server Compact Edition database, now migrated to SQL Server!! In a future post, I will explain the steps involved when using Visual Studio. Cheers !!!
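    If you prefer to double-check the migration from T-SQL rather than from Object Explorer, a quick sketch of a verification query (assuming the default "bakery" database name from the template) would be:

        -- Confirm the migrated database now exists on the target instance
        SELECT name, create_date
        FROM sys.databases
        WHERE name = N'bakery';

        -- Optionally spot-check that the tables came across with it
        SELECT name
        FROM bakery.sys.tables;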

    Read the article

  • bluez 5.19 PS4 controller

    - by Athanase
    I currently have a problem when pairing my computer with a PS4 remote. On my Ubuntu 14.04 I removed everything related to bluez and bluetooth, and I built and installed bluez 5.19. Here are some useful command outputs: jean@system ~ hcitool hcitool - HCI Tool ver 5.19 jean@system ~ hcitool dev Devices: hci0 00:15:83:4C:0C:BB jean@system ~ bluetoothctl [bluetooth]# version Version 5.19 jean@system ~ bluetoothctl [NEW] Controller 00:15:83:4C:0C:BB BlueZ [default] jean@system ~ lsusb Bus 003 Device 012: ID 0a12:0001 Cambridge Silicon Radio, Ltd Bluetooth Dongle (HCI mode) So here is what happens. When I try to hard pair the controller with the computer, by holding the share and ps button for a while, everything works as expected and the pairing is done properly. After a hard pairing, if I try to pair by pressing the ps button only, nothing happens. In order to do so, I first power up the bluetooth dongle: jean@system ~ sudo hciconfig hciX up and then I run the bluetooth daemon bluetoothd: jean@system /usr/libexec/bluetooth ~ ./bluetoothd -d -n bluetoothd[11270]: Bluetooth daemon 5.19 bluetoothd[11270]: src/main.c:parse_config() parsing main.conf bluetoothd[11270]: src/main.c:parse_config() discovto=0 bluetoothd[11270]: src/main.c:parse_config() pairto=0 bluetoothd[11270]: src/main.c:parse_config() auto_to=60 bluetoothd[11270]: src/main.c:parse_config() name=%h-%d bluetoothd[11270]: src/main.c:parse_config() class=0x000100 bluetoothd[11270]: src/main.c:parse_config() Key file does not have key 'DeviceID' bluetoothd[11270]: src/gatt.c:gatt_init() Starting GATT server bluetoothd[11270]: src/adapter.c:adapter_init() sending read version command bluetoothd[11270]: Starting SDP server bluetoothd[11270]: src/sdpd-service.c:register_device_id() Adding device id record for 0002:1d6b:0246:0513 ... 
bluetoothd[11270]: src/adapter.c:adapter_service_insert() /org/bluez/hci0 bluetoothd[11270]: src/adapter.c:add_uuid() sending add uuid command for index 0 bluetoothd[11270]: profiles/audio/a2dp.c:a2dp_sink_server_probe() path /org/bluez/hci0 bluetoothd[11270]: profiles/audio/a2dp.c:a2dp_source_server_probe() path /org/bluez/hci0 bluetoothd[11270]: src/adapter.c:btd_adapter_unblock_address() hci0 00:00:00:00:00:00 bluetoothd[11270]: src/adapter.c:get_ltk_info() A4:15:66:C1:0D:2A bluetoothd[11270]: src/device.c:device_create_from_storage() address A4:15:66:C1:0D:2A bluetoothd[11270]: src/device.c:device_new() address A4:15:66:C1:0D:2A bluetoothd[11270]: src/device.c:device_new() Creating device /org/bluez/hci0/dev_A4_15_66_C1_0D_2A bluetoothd[11270]: src/device.c:device_set_bonded() bluetoothd[11270]: src/adapter.c:get_ltk_info() A4:15:66:88:5E:9A bluetoothd[11270]: src/device.c:device_create_from_storage() address A4:15:66:88:5E:9A bluetoothd[11270]: src/device.c:device_new() address A4:15:66:88:5E:9A bluetoothd[11270]: src/device.c:device_new() Creating device /org/bluez/hci0/dev_A4_15_66_88_5E_9A bluetoothd[11270]: src/device.c:device_set_bonded() bluetoothd[11270]: src/adapter.c:load_link_keys() hci0 keys 2 debug_keys 0 bluetoothd[11270]: src/adapter.c:load_ltks() hci0 keys 0 bluetoothd[11270]: src/adapter.c:load_connections() sending get connections command for index 0 bluetoothd[11270]: src/adapter.c:adapter_service_insert() /org/bluez/hci0 bluetoothd[11270]: src/adapter.c:add_uuid() sending add uuid command for index 0 bluetoothd[11270]: src/adapter.c:set_did() hci0 source 2 vendor 1d6b product 246 version 513 bluetoothd[11270]: src/adapter.c:adapter_register() Adapter /org/bluez/hci0 registered bluetoothd[11270]: src/adapter.c:set_dev_class() sending set device class command for index 0 bluetoothd[11270]: src/adapter.c:set_name() sending set local name command for index 0 bluetoothd[11270]: src/adapter.c:set_mode() sending set mode command for index 0 bluetoothd[11270]: src/adapter.c:set_mode() sending set mode command for index 0 bluetoothd[11270]: src/adapter.c:adapter_start() adapter /org/bluez/hci0 has been enabled bluetoothd[11270]: src/adapter.c:trigger_passive_scanning() bluetoothd[11270]: plugins/hostname.c:property_changed() static hostname: system bluetoothd[11270]: plugins/hostname.c:property_changed() pretty hostname: bluetoothd[11270]: plugins/hostname.c:update_name() name: system bluetoothd[11270]: src/adapter.c:adapter_set_name() name: system bluetoothd[11270]: plugins/hostname.c:property_changed() chassis: desktop bluetoothd[11270]: plugins/hostname.c:update_class() major: 0x01 minor: 0x01 bluetoothd[11270]: src/adapter.c:load_link_keys_complete() link keys loaded for hci0 bluetoothd[11270]: src/adapter.c:load_ltks_complete() LTKs loaded for hci0 bluetoothd[11270]: src/adapter.c:get_connections_complete() Connection count: 0 And then I press the ps button of the PS4 controller bluetoothd[11270]: src/adapter.c:connected_callback() hci0 device A4:15:66:C1:0D:2A connected eir_len 5 bluetoothd[11270]: profiles/input/server.c:connect_event_cb() Incoming connection from A4:15:66:C1:0D:2A on PSM 17 bluetoothd[11270]: profiles/input/device.c:input_device_set_channel() idev (nil) psm 17 bluetoothd[11270]: Refusing input device connect: No such file or directory (2) bluetoothd[11270]: profiles/input/server.c:confirm_event_cb() bluetoothd[11270]: Refusing connection from A4:15:66:C1:0D:2A: unknown device bluetoothd[11270]: src/adapter.c:dev_disconnected() Device 
A4:15:66:C1:0D:2A disconnected, reason 3 bluetoothd[11270]: src/adapter.c:adapter_remove_connection() bluetoothd[11270]: plugins/policy.c:disconnect_cb() reason 3 bluetoothd[11270]: src/adapter.c:bonding_attempt_complete() hci0 bdaddr A4:15:66:C1:0D:2A type 0 status 0xe bluetoothd[11270]: src/device.c:device_bonding_complete() bonding (nil) status 0x0e bluetoothd[11270]: src/device.c:device_bonding_failed() status 14 bluetoothd[11270]: src/adapter.c:resume_discovery() So I don't know what is happening here and a bit of help would be appreciated.

    Read the article

  • SQLSaturday 33 Observations

    - by Geoff N. Hiten
    Along with a lot of my colleagues, I went to SQLSaturday #33 in Charlotte this last weekend.  Overall a really good event, especially for a first-time organizer.  There is some controversy over certain events where my name got mentioned, so I thought I would clear the air. Before I get to the core controversy, let's get the details out of the way.  The Microsoft Offices in Charlotte were an excellent venue for this event.  I really appreciated the Microsoft employees that helped out by letting us in and out of normally secure areas.  This is definitely above and beyond on their part. Thanks to the organizers (especially Greg and Peter) for the great hospitality they showed to the speakers.  Now for the specifics.  Like most events of this type, there was a raffle at the end for some cool swag.  As a speaker I got raffle tickets just like any other attendee.  The raffle was clearly promoted as "must be present to win".  The problem is that for various reasons, the raffle kicked off immediately after the last speaker finished in the largest room.  That room was across the parking lot from all the other rooms for the event.  I happened to have one of the last sessions of the day, and not in the main room.  I also ran long since the audience was very interactive and there were a lot of follow-up questions.  (BTW, thanks to everyone who came and stayed for my session.  Sorry it cost you the chance to win too.)  My name was drawn for a very nice piece of swag (iPod Touch if you insist).  Since I wasn't there, I didn't win. Several folks mentioned I was still speaking and was "here" (as in at the event) just not "here in the room". Yes, I was mad when I found out about it. I think that was handled poorly.  I personally lost out, as did my audience (dunno if anyone specific lost anything, but it is the idea that counts).  It was a mistake. Mistakes happen.  Nobody acted maliciously.  Heck, the guys running the event who made the decision are my friends and remain so.  I got over my mad.  We talked about this privately and we are all OK with what happened.  I am not going to let a gadget get in the way of a couple of good friendships. I think the mistake was mostly due to a lack of unity between the venue buildings.  Pam Shaw had a similar challenge in Tampa a few weeks ago, including a speaker who ran long on the last session (not me that time).  She had a couple of teenage volunteers to act as gofers/runners.  They counted heads in sessions, pointed people to last-minute room and session changes, and generally helped connect the organizers to what was actually happening.  Note that this was not Pam's first SQLSaturday event.  She knew, but the knowledge had not been institutionalized.  We (the SQL community in general and SQLSaturday organizers in particular) now know how essential gofers are to success. I know I spent most of this post focusing on the controversy, but I wanted to clear everything up.  I don't want to let a minor mistake, made in good faith, overshadow what was a tremendously good event for the community. As for the iPod Touch, someone in the SQL community is enjoying it, so it is not a total loss.  And if losing out on it is the price I pay so we can learn this, then that is what a community leader does.  Consider it a gift.  Besides, I really wanted a Zune 120 :)

    Read the article

  • Determining Maximum Txpower a WiFi Card Supports?

    - by BigGenius
    I have a Atheros R9285 wifi card. How can i determine , what is max. Txpower it can support? biggenius@hackbook:~$ iwconfig lo no wireless extensions. wlan0 IEEE 802.11abgn ESSID:"Default" Mode:Managed Frequency:2.437 GHz Access Point: 00:08:5C:9D:4F:40 Bit Rate=2 Mb/s Tx-Power=35 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:on Link Quality=24/70 Signal level=-86 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:140 Invalid misc:247 Missed beacon:0 eth0 no wireless extensions. biggenius@hackbook:~$ iw phy0 info Wiphy phy0 Band 1: Capabilities: 0x11ce HT20/HT40 SM Power Save disabled RX HT40 SGI TX STBC RX STBC 1-stream Max AMSDU length: 3839 bytes DSSS/CCK HT40 Maximum RX AMPDU length 65535 bytes (exponent: 0x003) Minimum RX AMPDU time spacing: 8 usec (0x06) HT TX/RX MCS rate indexes supported: 0-7 Frequencies: * 2412 MHz [1] (35.0 dBm) * 2417 MHz [2] (35.0 dBm) * 2422 MHz [3] (35.0 dBm) * 2427 MHz [4] (35.0 dBm) * 2432 MHz [5] (35.0 dBm) * 2437 MHz [6] (35.0 dBm) * 2442 MHz [7] (35.0 dBm) * 2447 MHz [8] (35.0 dBm) * 2452 MHz [9] (35.0 dBm) * 2457 MHz [10] (35.0 dBm) * 2462 MHz [11] (35.0 dBm) * 2467 MHz [12] (35.0 dBm) * 2472 MHz [13] (35.0 dBm) * 2484 MHz [14] (35.0 dBm) Bitrates (non-HT): * 1.0 Mbps * 2.0 Mbps (short preamble supported) * 5.5 Mbps (short preamble supported) * 11.0 Mbps (short preamble supported) * 6.0 Mbps * 9.0 Mbps * 12.0 Mbps * 18.0 Mbps * 24.0 Mbps * 36.0 Mbps * 48.0 Mbps * 54.0 Mbps Band 2: Capabilities: 0x11ce HT20/HT40 SM Power Save disabled RX HT40 SGI TX STBC RX STBC 1-stream Max AMSDU length: 3839 bytes DSSS/CCK HT40 Maximum RX AMPDU length 65535 bytes (exponent: 0x003) Minimum RX AMPDU time spacing: 8 usec (0x06) HT TX/RX MCS rate indexes supported: 0-7 Frequencies: * 5180 MHz [36] (35.0 dBm) * 5200 MHz [40] (35.0 dBm) * 5220 MHz [44] (35.0 dBm) * 5240 MHz [48] (35.0 dBm) * 5260 MHz [52] (35.0 dBm) (passive scanning, no IBSS, radar detection) * 5280 MHz [56] (35.0 dBm) (passive scanning, no IBSS, radar detection) * 5300 MHz [60] (35.0 dBm) (passive scanning, no IBSS, radar detection) * 5320 MHz [64] (35.0 dBm) (passive scanning, no IBSS, radar detection) * 5500 MHz [100] (35.0 dBm) (passive scanning, no IBSS, radar detection) * 5520 MHz [104] (35.0 dBm) (passive scanning, no IBSS, radar detection) * 5540 MHz [108] (35.0 dBm) (passive scanning, no IBSS, radar detection) * 5560 MHz [112] (35.0 dBm) (passive scanning, no IBSS, radar detection) * 5580 MHz [116] (35.0 dBm) (passive scanning, no IBSS, radar detection) * 5600 MHz [120] (35.0 dBm) (passive scanning, no IBSS, radar detection) * 5620 MHz [124] (35.0 dBm) (passive scanning, no IBSS, radar detection) * 5640 MHz [128] (35.0 dBm) (passive scanning, no IBSS, radar detection) * 5660 MHz [132] (35.0 dBm) (passive scanning, no IBSS, radar detection) * 5680 MHz [136] (35.0 dBm) (passive scanning, no IBSS, radar detection) * 5700 MHz [140] (35.0 dBm) (passive scanning, no IBSS, radar detection) * 5745 MHz [149] (35.0 dBm) * 5765 MHz [153] (35.0 dBm) * 5785 MHz [157] (35.0 dBm) * 5805 MHz [161] (35.0 dBm) * 5825 MHz [165] (35.0 dBm) Bitrates (non-HT): * 6.0 Mbps * 9.0 Mbps * 12.0 Mbps * 18.0 Mbps * 24.0 Mbps * 36.0 Mbps * 48.0 Mbps * 54.0 Mbps max # scan SSIDs: 4 max scan IEs length: 2257 bytes Coverage class: 0 (up to 0m) Supported Ciphers: * WEP40 (00-0f-ac:1) * WEP104 (00-0f-ac:5) * TKIP (00-0f-ac:2) * CCMP (00-0f-ac:4) * CMAC (00-0f-ac:6) Available Antennas: TX 0x1 RX 0x3 Configured Antennas: TX 0x1 RX 0x3 Supported interface modes: * IBSS * managed * 
AP * AP/VLAN * WDS * monitor * mesh point * P2P-client * P2P-GO software interface modes (can always be added): * AP/VLAN * monitor valid interface combinations: * #{ managed, WDS, P2P-client } <= 2048, #{ AP, mesh point, P2P-GO } <= 8, total <= 2048, #channels <= 1 Supported commands: * new_interface * set_interface * new_key * new_beacon * new_station * new_mpath * set_mesh_params * set_bss * authenticate * associate * deauthenticate * disassociate * join_ibss * join_mesh * remain_on_channel * set_tx_bitrate_mask * action * frame_wait_cancel * set_wiphy_netns * set_channel * set_wds_peer * Unknown command (82) * Unknown command (81) * Unknown command (84) * Unknown command (87) * Unknown command (85) * testmode * connect * disconnect Supported TX frame types: * IBSS: 0x0000 0x0010 0x0020 0x0030 0x0040 0x0050 0x0060 0x0070 0x0080 0x0090 0x00a0 0x00b0 0x00c0 0x00d0 0x00e0 0x00f0 * managed: 0x0000 0x0010 0x0020 0x0030 0x0040 0x0050 0x0060 0x0070 0x0080 0x0090 0x00a0 0x00b0 0x00c0 0x00d0 0x00e0 0x00f0 * AP: 0x0000 0x0010 0x0020 0x0030 0x0040 0x0050 0x0060 0x0070 0x0080 0x0090 0x00a0 0x00b0 0x00c0 0x00d0 0x00e0 0x00f0 * AP/VLAN: 0x0000 0x0010 0x0020 0x0030 0x0040 0x0050 0x0060 0x0070 0x0080 0x0090 0x00a0 0x00b0 0x00c0 0x00d0 0x00e0 0x00f0 * mesh point: 0x0000 0x0010 0x0020 0x0030 0x0040 0x0050 0x0060 0x0070 0x0080 0x0090 0x00a0 0x00b0 0x00c0 0x00d0 0x00e0 0x00f0 * P2P-client: 0x0000 0x0010 0x0020 0x0030 0x0040 0x0050 0x0060 0x0070 0x0080 0x0090 0x00a0 0x00b0 0x00c0 0x00d0 0x00e0 0x00f0 * P2P-GO: 0x0000 0x0010 0x0020 0x0030 0x0040 0x0050 0x0060 0x0070 0x0080 0x0090 0x00a0 0x00b0 0x00c0 0x00d0 0x00e0 0x00f0 Supported RX frame types: * IBSS: 0x00d0 * managed: 0x0040 0x00d0 * AP: 0x0000 0x0020 0x0040 0x00a0 0x00b0 0x00c0 0x00d0 * AP/VLAN: 0x0000 0x0020 0x0040 0x00a0 0x00b0 0x00c0 0x00d0 * mesh point: 0x00b0 0x00c0 0x00d0 * P2P-client: 0x0040 0x00d0 * P2P-GO: 0x0000 0x0020 0x0040 0x00a0 0x00b0 0x00c0 0x00d0 Device supports RSN-IBSS.

    Read the article

  • Silverlight Cream for March 09, 2011 -- #1057

    - by Dave Campbell
    In this Issue: Dennis Doomen, Peter Kuhn, Michael Crump, Joe McBride, Martin Krüger, Jeremy Likness, Manas Patnaik, Jesse Liberty(-2-), WindowsPhoneGeek(-2-). Above the Fold: Silverlight: "A highlighting AutoCompleteBox in Silverlight" Peter Kuhn WP7: "WP7 WatermarkedTextBox custom control" WindowsPhoneGeek Training: "" Shoutouts: Karl Shifflett announced that he and Josh Smith have heard the developers and released a demo: Mole 2010 Demo Released This is a somewhat older post, but the material is good and I was reminded of it while talking to Josh Smith at the MVP summit last week: Advanced MVVM ... money well-spent From SilverlightCream.com: Introducing the Silverlight Cookbook Dennis Doomen unveils a Codeplex site "containing a Silverlight 4 app that includes most of the complexities you might run into" ... I'm tagging this in my WynApse outlookbar... great stuff, Dennis! A highlighting AutoCompleteBox in Silverlight Peter Kuhn took on a task in response to a forum query and created a highlighting AutoCompleteBox, and is giving it to us... this really looks cool, Peter, and great explanation. Taking a look at the Mindscape Phone Elements for WP7. Michael Crump takes a good look at the Mindscape Phone Elements for WP7... and if you read closely you might still be able to get a free license! Windows Phone – “Can’t connect to your phone. Disconnect it, Restart it, then try connecting again.” Joe McBride explains a way out of an issue that many should be seeing as we repave or replace machines... how to get our device recognized on the updated machine... without giving cryptic messages. How to: only with the full visibility of an application in the browser window start an action Martin Krüger continues his journey in starting storyboards and tackles the condition that the application is completely in the browser window prior to the storyboard starting. A Numeric Input Control for Windows Phone 7 Jeremy Likness came up with a great idea for numeric input for WP7 ... you'll smile when you see it, but what a great idea... and a NumericTextBox to go along with it. Performing CRUD on Relational Data (Multiple table) using RIA in SL4 Manas Patnaik has a post up that breaks the normal blog post or demo mold by having two tables with a relational constraint and doing CRUD operations on them. Plenty of diagrams and good information. Select Many: Reactive Extensions’ Mother Of All Operators [Chaining] Jesse Liberty has part 9 in his Rx series up, and is looking at SelectMany this time, and chaining calls. He's using WPF for the sample, but the goodness is all there for us Silverlight guys too. The Full Stack 8–Adding Search to the Phone Client Jesse Liberty and Jon Galloway have part 8 of their Full Stack series up ... this is the MVC3, ASP.NET, Silverlight, and WP7 app development series... this time out they're putting Search in the Phone client. All about ResourceDictionary in WP7 WindowsPhoneGeek is discussing ResourceDictionaries in this post... beginning with What is a ResourceDictionary and continuing out through creating and using one, plus a good comment on merging. WP7 WatermarkedTextBox custom control In his next post, WindowsPhoneGeek walks us through the creation of a WatermarkedTextBox for WP7 right from the derivation from TextBox... very nice tutorial and lots of code/examples. Stay in the 'Light! 

    Read the article

  • Implementing Database Settings Using Policy Based Management

    - by Ashish Kumar Mehta
Introduction Database Administrators have always had a tough time ensuring that all the SQL Servers they administer are configured according to the policies and standards of the organization. Using SQL Server's Policy Based Management feature, DBAs can now manage one or more instances of SQL Server 2008 and check for policy compliance issues. In this article we will utilize the Policy Based Management (aka Declarative Management Framework or DMF) feature of SQL Server to implement and verify database settings on all production databases. It is best practice to enforce the settings below on each production database. However, it can be tedious to go through each database and check whether these settings are in place. In this article I will explain how to utilize the Policy Based Management feature of SQL Server 2008 to create a policy to verify these settings on all databases and, in cases of non-compliance, how to bring them back into compliance. Database settings to enforce on each user database: Auto Close and Auto Shrink properties of the database set to False Auto Create Statistics and Auto Update Statistics set to True Compatibility Level of all user databases set to 100 Page Verify set to CHECKSUM Recovery Model of all user databases set to Full Restrict Access set to MULTI_USER Configure a Policy to Verify Database Settings 1. Connect to a SQL Server 2008 instance using SQL Server Management Studio 2. In Object Explorer, click Management > Policy Management and you will see Policies, Conditions & Facets as child nodes 3. Right-click Policies and then select New Policy… from the drop-down list as shown in the snippet below to open the Create New Policy popup window. 4. In the Create New Policy popup window you need to provide the name of the policy as "Implementing and Verify Database Settings for Production Databases" and then click the drop-down list under Check Condition. As highlighted in the snippet below, click the New Condition… option to open the Create New Condition window. 5. In the Create New Condition popup window you need to provide the name of the condition as "Verify and Change Database Settings". In the Facet drop-down list you need to choose the Facet as Database Options as shown in the snippet below. Under Expression you need to select the Field value as @AutoClose, then choose the Operator value as '=' and finally choose the Value as False. Now that you have successfully added the first field, you can go ahead and add the rest of the fields as shown in the snippet below. Once you have successfully added all of the above fields of the Database Options facet, click OK to save the changes and return to the parent Create New Policy – Implementing and Verify Database Settings for Production Databases window, where you will see that the newly created condition "Verify and Change Database Settings" is selected by default. Continues…
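As a side note, before building the policy it can help to see where each database currently stands. The query below is a rough sketch (my own, not from the article, which drives everything through the SSMS policy UI); it reads the equivalent options from the standard sys.databases catalog view available in SQL Server 2008 and later:

    -- Databases returned by this query would fail the policy described above
    SELECT name,
           is_auto_close_on,            -- expected 0 (False)
           is_auto_shrink_on,           -- expected 0 (False)
           is_auto_create_stats_on,     -- expected 1 (True)
           is_auto_update_stats_on,     -- expected 1 (True)
           compatibility_level,         -- expected 100
           page_verify_option_desc,     -- expected CHECKSUM
           recovery_model_desc,         -- expected FULL
           user_access_desc             -- expected MULTI_USER
    FROM sys.databases
    WHERE database_id > 4               -- skip the system databases
      AND ( is_auto_close_on = 1
            OR is_auto_shrink_on = 1
            OR is_auto_create_stats_on = 0
            OR is_auto_update_stats_on = 0
            OR compatibility_level <> 100
            OR page_verify_option_desc <> 'CHECKSUM'
            OR recovery_model_desc <> 'FULL'
            OR user_access_desc <> 'MULTI_USER' );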

    Read the article

  • Prepping the Raspberry Pi for Java Excellence (part 1)

    - by HecklerMark
    I've only recently been able to begin working seriously with my first Raspberry Pi, received months ago but hastily shelved in preparation for JavaOne. The Raspberry Pi and other diminutive computing platforms offer a glimpse of the potential of what is often referred to as the embedded space, the "Internet of Things" (IoT), or Machine to Machine (M2M) computing. I have a few different configurations I want to use for multiple Raspberry Pis, but for each of them, I'll need to perform the following common steps to prepare them for their various tasks: Load an OS onto an SD card Get the Pi connected to the network Load a JDK I've been very happy to see good friend and JFXtras teammate Gerrit Grunwald document how to do these things on his blog (link to article here - check it out!), but I ran into some issues configuring wi-fi that caused me some needless grief. Not knowing if any of the pitfalls were caused by my slightly-older version of the Pi and not being able to find anything specific online to help me get past it, I kept chipping away at it until I broke through. The purpose of this post is to (hopefully) help someone else recognize the same issues if/when they encounter them and work past them quickly. There is a great resource page here that covers several ways to get the OS on an SD card, but here is what I did (on a Mac): Plug SD card into reader on/in Mac Format it (FAT32) Unmount it (diskutil unmountDisk diskn, where n is the disk number representing the SD card) Transfer the disk image for Debian to the SD card (dd if=2012-08-08-wheezy-armel.img of=/dev/diskn bs=1m) Eject the card from the Mac (diskutil eject diskn) There are other ways, but this is fairly quick and painless, especially after you do it several times. Yes, I had to do that dance repeatedly (minus formatting) due to the wi-fi issues, as it kept killing the ability of the Pi to boot. You should be able to dramatically reduce the number of OS loads you do, though, if you do a few things with regard to your wi-fi. Firstly, I strongly recommend you purchase the Edimax EW-7811Un wi-fi adapter. This adapter/chipset has been proven with the Raspberry Pi, it's tiny, and it's cheap. Avoid unnecessary aggravation and buy this one! Secondly, visit this page for a script and instructions regarding how to configure your new wi-fi adapter with your Pi. Here is the rub, though: there is a missing step. At least for my combination of Pi version, OS version, and uncanny gift of timing and luck there was. :-) Here is the sequence of steps I used to make the magic happen: Plug your newly-minted SD card (with OS) into your Pi and connect a network cable (for internet connectivity) Boot your Pi. On the first boot, do the following things: Opt to have it use all space on the SD card (will require a reboot eventually) Disable overscan Set your timezone Enable the ssh server Update raspi-config Reboot your Pi. This will reconfigure the SD to use all space (see above). After you log in (UID: pi, password: raspberry), upgrade your OS. This was the missing step for me that put a merciful end to the repeated SD card re-imaging and made the wi-fi configuration trivial. To do so, just type sudo apt-get upgrade and give it several minutes to complete. Pour yourself a cup of coffee and congratulate yourself on the time you've just saved.  ;-) With the OS upgrade finished, now you can follow Mr. Engman's directions (to the letter, please see link above), download his script, and let it work its magic. 
One aside: I plugged the little power-sipping Edimax directly into the Pi and it worked perfectly. No powered hub needed, at least in my configuration. To recap, that OS upgrade (at least at this point, with this combination of OS/drivers/Pi version) is absolutely essential for a smooth experience. Miss that step, and you're in for hours of "fun". Save yourself! I'll pick up next time with more of the Java side of the RasPi configuration, but as they say, you have to cross the moat to get into the castle. Hopefully, this will help you do just that. Until next time! All the best, Mark 

    Read the article

  • A quick look at: sys.dm_os_buffer_descriptors

    - by Jonathan Allen
SQL Server places data into cache as it reads it from disk so as to speed up future queries. This dmv lets you see how much data is cached at any given time, and knowing how this changes over time can help you ensure your servers run smoothly and are adequately resourced to run your systems. This dmv gives the number of cached pages in the buffer pool along with the database id that they relate to: USE [tempdb] GO SELECT COUNT(*) AS cached_pages_count , CASE database_id WHEN 32767 THEN 'ResourceDb' ELSE DB_NAME(database_id) END AS Database_name FROM sys.dm_os_buffer_descriptors GROUP BY DB_NAME(database_id) , database_id ORDER BY cached_pages_count DESC; This gives you results which are quite useful, but if you add a new column with the code: …to convert the pages value to show an MB value then they become more relevant and meaningful. To see how your server reacts to queries, start up SSMS and connect to a test server and database – mine is called AdventureWorks2008. Make sure you start from a known position by running: -- Only run this on a test server, otherwise your production server's performance may drop off a cliff and your phone will start ringing. DBCC DROPCLEANBUFFERS GO Now we can run a query that would normally turn a DBA's hair white: USE [AdventureWorks2008] go SELECT * FROM [Sales].[SalesOrderDetail] AS sod INNER JOIN [Sales].[SalesOrderHeader] AS soh ON [sod].[SalesOrderID] = [soh].[SalesOrderID] …and then check our cache situation: A nice low figure – not! Almost 2000 pages of data in cache equating to approximately 15MB. Luckily these tables are quite narrow; if this had been on a table with more columns then this could be even more dramatic. So, let's make our query more efficient. After resetting the cache with the DROPCLEANBUFFERS and FREEPROCCACHE code above, we'll only select the columns we want and implement a WHERE predicate to limit the rows to a specific customer. SELECT [sod].[OrderQty] , [sod].[ProductID] , [soh].[OrderDate] , [soh].[CustomerID] FROM [Sales].[SalesOrderDetail] AS sod INNER JOIN [Sales].[SalesOrderHeader] AS soh ON [sod].[SalesOrderID] = [soh].[SalesOrderID] WHERE [soh].[CustomerID] = 29722 …and check the effect on our cache: Now that is more sympathetic to our server and the other systems sharing its resources. I can hear you asking: "What has this got to do with logging, Jonathan?" Well, a smart DBA will keep an eye on this metric on their servers so they know how their hardware is coping and be ready to investigate anomalies so that no 'disruptive' code starts to unsettle things. Capturing this information over a period of time can lead you to build a picture of how a database relies on the cache and how it interacts with other databases. This might allow you to decide on appropriate schedules for overnight jobs or otherwise balance the work of your server.
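The extra-column snippet itself is elided above, but judging from the conversion used later in the logging INSERT (buffer pool pages are 8 KB each), the MB variant is presumably something along these lines (a sketch rather than the author's exact code):

    SELECT COUNT(*) AS cached_pages_count ,
           ( COUNT(*) * 8.0 ) / 1024 AS MB ,   -- 8 KB pages converted to megabytes
           CASE database_id
                WHEN 32767 THEN 'ResourceDb'
                ELSE DB_NAME(database_id)
           END AS Database_name
    FROM sys.dm_os_buffer_descriptors
    GROUP BY DB_NAME(database_id) , database_id
    ORDER BY cached_pages_count DESC;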
You could schedule this to run as a SQL Agent job and store the data in your DBA's database by creating a table with: IF OBJECT_ID('CachedPages') IS NOT NULL DROP TABLE CachedPages CREATE TABLE CachedPages ( cached_pages_count INT , MB INT , Database_Name VARCHAR(256) , CollectedOn DATETIME DEFAULT GETDATE() ) …and then filling it with: INSERT INTO [dbo].[CachedPages] ( [cached_pages_count] , [MB] , [Database_Name] ) SELECT COUNT(*) AS cached_pages_count , ( COUNT(*) * 8.0 ) / 1024 AS MB , CASE database_id WHEN 32767 THEN 'ResourceDb' ELSE DB_NAME(database_id) END AS Database_name FROM sys.dm_os_buffer_descriptors GROUP BY database_id After this has been left logging your system metrics for a while, you can easily see how your databases use the cache over time and may see some spikes that warrant your attention. This sort of logging can be applied to all sorts of server statistics so that you can gather information that will give you baseline data on how your servers are performing. This means that when you get a problem you can see which statistics are out of their normal range and target your efforts to resolve the issue more rapidly.
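Once the Agent job has been collecting for a while, a simple query over the logging table makes the trend easy to review; a minimal sketch, assuming the CachedPages table defined above:

    -- Cache usage per database over time, most recent samples first
    SELECT Database_Name ,
           CollectedOn ,
           cached_pages_count ,
           MB
    FROM dbo.CachedPages
    ORDER BY Database_Name , CollectedOn DESC;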

    Read the article

  • MP3 player mpman TS300 not recognized

    - by Nick Lemaire
    I just received a mpman ts300 mp3 player for Christmas. But when I try to connect it to ubuntu (10.10) through usb it doesn't seem to be recognized. I searched for a linuxdriver but came up with nothing. I even tried installing mpman manager from the software center, but still the same problem... Has anybody got any ideas about how to fix this? Thank you edit: some additional information When I plug it in, it doesn't show up under /dev. And it is not listed in the result of 'lsusb' This is the output of 'dmesg' [19571.732541] usb 1-4: new high speed USB device using ehci_hcd and address 2 [19571.889154] scsi7 : usb-storage 1-4:1.0 [19572.883155] scsi 7:0:0:0: Direct-Access TS300 USB DISK 1.00 PQ: 0 ANSI: 0 [19572.885856] sd 7:0:0:0: Attached scsi generic sg2 type 0 [19572.887266] sd 7:0:0:0: [sdb] 7868416 512-byte logical blocks: (4.02 GB/3.75 GiB) [19573.012547] usb 1-4: reset high speed USB device using ehci_hcd and address 2 [19573.292550] usb 1-4: reset high speed USB device using ehci_hcd and address 2 [19573.572565] usb 1-4: reset high speed USB device using ehci_hcd and address 2 [19573.725019] sd 7:0:0:0: [sdb] Test WP failed, assume Write Enabled [19573.725024] sd 7:0:0:0: [sdb] Assuming drive cache: write through [19573.728521] sd 7:0:0:0: [sdb] Attached SCSI removable disk [19575.333015] sd 7:0:0:0: [sdb] 7868416 512-byte logical blocks: (4.02 GB/3.75 GiB) [19575.452547] usb 1-4: reset high speed USB device using ehci_hcd and address 2 [19580.603253] usb 1-4: device descriptor read/all, error -110 [19580.722560] usb 1-4: reset high speed USB device using ehci_hcd and address 2 [19595.842579] usb 1-4: device descriptor read/64, error -110 [19611.072656] usb 1-4: device descriptor read/64, error -110 [19611.302540] usb 1-4: reset high speed USB device using ehci_hcd and address 2 [19616.332568] usb 1-4: device descriptor read/8, error -110 [19621.462692] usb 1-4: device descriptor read/8, error -110 [19621.692551] usb 1-4: reset high speed USB device using ehci_hcd and address 2 [19626.722567] usb 1-4: device descriptor read/8, error -110 [19631.852802] usb 1-4: device descriptor read/8, error -110 [19631.962587] usb 1-4: USB disconnect, address 2 [19631.962587] sd 7:0:0:0: Device offlined - not ready after error recovery [19631.962840] sd 7:0:0:0: [sdb] Assuming drive cache: write through [19631.965104] sd 7:0:0:0: [sdb] READ CAPACITY failed [19631.965109] sd 7:0:0:0: [sdb] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK [19631.965112] sd 7:0:0:0: [sdb] Sense not available. 
[19631.965125] sd 7:0:0:0: [sdb] Assuming drive cache: write through [19631.965130] sdb: detected capacity change from 4028628992 to 0 [19632.130042] usb 1-4: new high speed USB device using ehci_hcd and address 3 [19647.242550] usb 1-4: device descriptor read/64, error -110 [19662.472560] usb 1-4: device descriptor read/64, error -110 [19662.702566] usb 1-4: new high speed USB device using ehci_hcd and address 4 [19677.822587] usb 1-4: device descriptor read/64, error -110 [19693.052575] usb 1-4: device descriptor read/64, error -110 [19693.282582] usb 1-4: new high speed USB device using ehci_hcd and address 5 [19698.312600] usb 1-4: device descriptor read/8, error -110 [19703.442594] usb 1-4: device descriptor read/8, error -110 [19703.672548] usb 1-4: new high speed USB device using ehci_hcd and address 6 [19708.702581] usb 1-4: device descriptor read/8, error -110 [19713.840077] usb 1-4: device descriptor read/8, error -110 [19713.942555] hub 1-0:1.0: unable to enumerate USB device on port 4 [19714.262545] usb 4-2: new full speed USB device using uhci_hcd and address 2 [19729.382549] usb 4-2: device descriptor read/64, error -110 [19744.612534] usb 4-2: device descriptor read/64, error -110 [19744.842543] usb 4-2: new full speed USB device using uhci_hcd and address 3 [19759.962714] usb 4-2: device descriptor read/64, error -110 [19775.200047] usb 4-2: device descriptor read/64, error -110 [19775.422630] usb 4-2: new full speed USB device using uhci_hcd and address 4 [19780.444498] usb 4-2: device descriptor read/8, error -110 [19785.574491] usb 4-2: device descriptor read/8, error -110 [19785.802547] usb 4-2: new full speed USB device using uhci_hcd and address 5 [19790.824490] usb 4-2: device descriptor read/8, error -110 [19795.961473] usb 4-2: device descriptor read/8, error -110 [19796.062556] hub 4-0:1.0: unable to enumerate USB device on port 2

    Read the article

  • Tuning Red Gate: #3 of Lots

    - by Grant Fritchey
I'm drilling down into the metrics about SQL Server itself that are available to me in the Analysis tab of SQL Monitor to see what's up with our two problematic servers. In the previous post I'd noticed that rg-sql01 had quite a few CPU spikes. So one of the first things I want to check there is how much CPU is getting used by SQL Server itself. It's possible we're looking at some other process using up all the CPU. Nope, it's SQL Server. I compared this to the rg-sql02 server: You can see that there is a more consistently low set of CPU counters there. I clearly need to look at rg-sql01 and capture more specific data around the queries running on it to identify which ones are causing these CPU spikes. I always like to look at the Batch Requests/sec on a server, not because it's an indication of a problem, but because it gives you some idea of the load. Just how much is this server getting hit? Here are rg-sql01 and rg-sql02: Of the two, clearly rg-sql01 has a lot of activity. Remember though, that's all this is a measure of: activity. It doesn't suggest anything other than what it says, the number of requests coming in. But it's the kind of thing you want to know in order to understand how the system is used. Are you seeing a correlation between the number of requests and the CPU usage, or a reverse correlation, where the number of requests drops as the CPU spikes? See, it's useful. Some of the details you can look at are Compilations/sec, Compilations/Batch and Recompilations/sec. These give you some idea of how the cache is getting used within the system. None of these showed anything interesting on either server. One metric that I like (even though I know it can be controversial) is the Page Life Expectancy. On the average server I expect to see a series of mountains as the PLE climbs then drops due to a data load or something along those lines. That's not the case here: Those spikes back in January suggest that the servers weren't really being used much. The PLE on rg-sql01 seems to be somewhat consistent, growing to 3 hours or so then dropping, but the rg-sql02 PLE looks like it might be all over the map. Instead of continuing to look at this high-level data gathering view, I'm going to drill down on rg-sql02 and see what it's done for the last week: And now we begin to see where we might have an issue. Memory on this system is getting flushed every 1/2 hour or so. I'm going to check another metric, scans: Whoa! I'm going back to the system real quick to look at some disk information again for rg-sql02. Here is the average disk queue length on the server: and the transfers: Right, I think I have a guess as to what's up here. We're seeing memory get flushed constantly and we're seeing lots of scans. The disks are queuing, especially that F drive, and there are lots of requests that correspond to the scans and the memory flushes. In short, we've got queries that are scanning the data, a lot, so we either have bad queries or bad indexes. I'm going back to the server overview for rg-sql02 to check the Top 10 expensive queries. I'm modifying it to show me the last 3 days and the totals, so I'm not looking at some maintenance routine that ran 10 minutes ago and is skewing the results: OK. I need to look into these queries that are getting executed this much. They're generating a lot of reads, but which queries are generating the most reads: Ow, all still going against the same database. This is where I'm going to temporarily leave SQL Monitor.
What I want to do is connect up to the server, validate that the Warehouse database is using the F:\ drive (which I'll put money down it is) and then start seeing what's up with these queries. Part 1 of the Series Part 2 of the Series
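If you want to chase the same scan-heavy queries directly on the server rather than through SQL Monitor's view, sys.dm_exec_query_stats offers a comparable "top queries by reads" starting point. This is a hedged sketch using the standard DMV columns, not Red Gate's implementation:

    -- Top 10 cached query plans by total logical reads
    SELECT TOP 10
           qs.total_logical_reads ,
           qs.execution_count ,
           qs.total_logical_reads / qs.execution_count AS avg_logical_reads ,
           SUBSTRING(st.text, ( qs.statement_start_offset / 2 ) + 1,
                     ( ( CASE qs.statement_end_offset
                              WHEN -1 THEN DATALENGTH(st.text)
                              ELSE qs.statement_end_offset
                         END - qs.statement_start_offset ) / 2 ) + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
         CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_logical_reads DESC;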

    Read the article

  • Configuring Multiple Instances of MySQL in Solaris 11

    - by rajeshr
Recently someone asked me for steps to configure multiple instances of the MySQL database on an operating platform. Because of my familiarity with Solaris OE, I prepared some notes on configuring multiple instances of the MySQL database on Solaris 11. Maybe it's useful for some: If you want to run the Solaris Operating System (or any other OS of your choice) as a virtualized instance on your desktop, consider using VirtualBox. To download Solaris Operating System, click here. Once you have your Solaris Operating System (Version 11) up and running and have Internet connectivity to gain access to the Image Packaging System (IPS), please follow the steps as mentioned below to install MySQL and configure multiple instances: 1. Install MySQL Database in Solaris 11 $ sudo pkg install mysql-51 2. Verify that MySQL is installed: $ svcs -a | grep mysql Note: The service FMRI will look similar to the one here: svc:/application/database/mysql:version_51 3. Prepare the data file system for MySQL Instance 1 zfs create rpool/mysql zfs create rpool/mysql/data zfs set mountpoint=/mysql/data rpool/mysql/data 4. Prepare the data file system for MySQL Instance 2 zfs create rpool/mysql/data2 zfs set mountpoint=/mysql/data2 rpool/mysql/data2 5. Change the mysql/datadir of the MySQL Service (SMF) to point to /mysql/data $ svcprop mysql:version_51 | grep mysql/data $ svccfg -s mysql:version_51 setprop mysql/data=/mysql/data 6. Create a new instance of MySQL 5.1 (a) Copy the manifest of the default instance to a temporary directory: $ sudo cp /lib/svc/manifest/application/database/mysql_51.xml /var/tmp/mysql_51_2.xml (b) Make the appropriate modifications to the XML file $ sudo vi /var/tmp/mysql_51_2.xml - Change the "instance name" section to a new value "version_51_2" - Change the value of the property named "data" to point to the ZFS file system "/mysql/data2" 7. Import the manifest into the SMF repository: $ sudo svccfg import /var/tmp/mysql_51_2.xml 8. Before starting the service, copy the file /etc/mysql/my.cnf to the data directories /mysql/data & /mysql/data2. $ sudo cp /etc/mysql/my.cnf /mysql/data/ $ sudo cp /etc/mysql/my.cnf /mysql/data2/ 9. Make modifications to the my.cnf in each of the data directories as required: $ sudo vi /mysql/data/my.cnf Under the [client] section port=3306 socket=/tmp/mysql.sock ---- ---- Under the [mysqld] section port=3306 socket=/tmp/mysql.sock datadir=/mysql/data ----- ----- server-id=1 $ sudo vi /mysql/data2/my.cnf Under the [client] section port=3307 socket=/tmp/mysql2.sock ----- ----- Under the [mysqld] section port=3307 socket=/tmp/mysql2.sock datadir=/mysql/data2 ----- ----- server-id=2 10. Make the appropriate modifications to the startup script of MySQL (managed by SMF) to point to the appropriate my.cnf for each instance: $ sudo vi /lib/svc/method/mysql_51 Note: Search for all occurrences of the mysqld_safe command and modify it to include the --defaults-file option. An example entry would look as follows: ${MySQLBIN}/mysqld_safe --defaults-file=${MYSQLDATA}/my.cnf --user=mysql --datadir=${MYSQLDATA} --pid-file=${PIDFILE} 11. Start the service: $ sudo svcadm enable mysql:version_51_2 $ sudo svcadm enable mysql:version_51 12. Verify that the two services are running by using: $ svcs mysql 13. Verify the processes: $ ps -ef | grep mysqld 14.
Connect to each mysqld instance and verify: $ mysql --defaults-file=/mysql/data/my.cnf -u root -p $ mysql --defaults-file=/mysql/data2/my.cnf -u root -p Some references for Solaris 11 newbies Taking your first steps with Solaris 11 Introducing the basics of Image Packaging System Service Management Facility How To Guide For a detailed list of official educational modules available on Solaris 11, please visit here For MySQL courses from Oracle University access this page.
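Going back to step 14: as a quick sanity check after logging in, each instance should report its own port and data directory. These are standard MySQL server variables, so something like the following should apply, though it is not part of the original write-up:

    -- Run in each session: instance 1 should show port 3306 and datadir /mysql/data,
    -- instance 2 should show port 3307 and datadir /mysql/data2
    SHOW VARIABLES LIKE 'port';
    SHOW VARIABLES LIKE 'datadir';
    SHOW VARIABLES LIKE 'socket';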

    Read the article

  • Windows 8 Apps with HTML5 and JavaScript

    - by Stephen.Walther
    Last week, I finished writing Windows 8 Apps with HTML5 and JavaScript – Yikes! That is a long title. This book is all about writing apps for Windows 8 which can be added to the Windows Store. The book focuses on building apps using HTML5 and JavaScript. If you are already comfortable building websites, then building Windows Store apps is not a huge leap.  I explain how you can create productivity apps, like a Task List app, and games, like a simple arcade game. I also explain how you can publish your app to the Windows Store and make money. To celebrate the release of Windows 8, my publisher is offering a huge 40% discount on the book until November 30, 2012. If you want to take advantage of this discount, follow the link below and enter the discount code WINDEV40 during checkout. http://www.informit.com/promotions/promotion.aspx?promo=139036&walther So what’s in the book?  Here’s an overview of each of the chapters: Chapter 1 – Building Windows Store Apps Contains a walkthrough of creating a super simple Windows app for taking pictures from your webcam. Explains how to publish your app to the Windows Store. Chapter 2 – WinJS Fundamentals Provides an overview of the Windows Library for JavaScript which is the Microsoft library for creating Windows Store apps with JavaScript. Chapter 3 – Observables, Bindings, and Templates You learn how to display a list of items using a template. For example, you learn how to create a template which can be used to display a list of products. Chapter 4 – Using WinJS Controls Overview of the core set of JavaScript controls included with the WinJS library. You learn how to use the Tooltip, ToggleSwitch, Rating, DatePicker, TimePicker, and FlipView controls. Chapter 5 – Creating Forms This chapter explains how to take advantage of HTML5 forms to display specialized keyboards and perform form validation. Chapter 6 – Menus and Flyouts You learn how to display popups, menus, and toolbars using the JavaScript controls included with the WinJS library. Chapter 7 – Using the ListView Control This entire chapter is devoted to the ListView control which is the most important control in the WinJS library. You can use the ListView control to display, sort, filter, and edit a list of items. Chapter 8 – Creating Data Sources Learn how to use a ListView control to display data from the file system, a web service, and IndexedDB. Chapter 9 – App Events and States This chapter explains the standard application events which are raised in a Windows Store app such as the activated and checkpoint events. You also learn how to build apps which adapt automatically to different view states such as portrait and landscape. Chapter 10 – Page Fragments and Navigation This chapter discusses two subjects: You learn how to create custom WinJS controls with Page Controls and you learn how to build apps with multiple pages.  Chapter 11 – Using the Live Connect API Learn how to use Windows Live Services to authenticate users, interact with SkyDrive, and retrieve user profile information (such as a user’s birthday or profile picture). Chapter 12 – Graphics and Games This chapter is devoted to building the Brain Eaters app which is a simple arcade game. Navigate a maze and eat all of the food pellets while avoiding the brain-eating zombies to win the game. Learn how to create the game using HTML5 Canvas.   If you want to buy the book, remember to use the magic discount code WINDEV40 and visit the following link: http://www.informit.com/promotions/promotion.aspx?promo=139036&walther

    Read the article

  • SQL SERVER – Why Do We Need Data Quality Services – Importance and Significance of Data Quality Services (DQS)

    - by pinaldave
Databases are awesome. I'm sure my readers know my opinion about this – I have made SQL Server my life's work after all! I love technology and all things computer-related. Of course, even with my love for technology, I have to admit that it has its limits. For example, it takes a human brain to notice that data has been input incorrectly. Computer "brains" might be faster than humans, but human brains are still better at pattern recognition. For example, a human brain will notice that "300" is a ridiculous age for a human to be, but to a computer it is just a number. A human will also notice similarities between "P. Dave" and "Pinal Dave," but this would stump most computers. In a database, these sorts of anomalies are incredibly important. Databases are often used by multiple people who rely on this data to be true and accurate, so data quality is key. That is why SQL Server's improved Master Data Management features include Data Quality Services. This service has the ability to recognize and flag anomalies like out-of-range numbers and similarities between data. This allows a human brain with its pattern recognition abilities to double-check and ensure that P. Dave is the same as Pinal Dave. A nice feature of Data Quality Services is that once you set the rules for the program to follow, it will not only keep your data organized in the future, but go to the past and "fix up" any data that has already been entered. It also allows you to combine data from multiple places and it will apply these rules across the board, so that you don't have any weird issues that crop up when trying to fit a round peg into a square hole. There are two parts of Data Quality Services that help you accomplish all these neat things. The first part is DQS Server, which you can think of as the hardware component of the system. It is installed alongside SQL Server (it needs to be installed separately, after SQL Server is installed) and runs quietly in the background, performing all its cleanup services. DQS Client is the user interface that you can interact with to set the rules and check over your data. There are three main aspects of the Client: knowledge base management, data quality projects, and administration. Knowledge base management is the part of the system that allows you to set the rules, or program the "knowledge base," so that your database is clean and consistent. Data Quality projects are what run in the background and clean up the data that is already present. The administration area allows you to check out what DQS Client is doing, change rules, and generally oversee the entire process. The whole process is user-friendly and a pleasure to use. I highly recommend implementing Data Quality Services in your database. Here are a few of my blog posts which are related to Data Quality Services, and I encourage you to try this out.
SQL SERVER – Installing Data Quality Services (DQS) on SQL Server 2012 SQL SERVER – Step by Step Guide to Beginning Data Quality Services in SQL Server 2012 – Introduction to DQS SQL SERVER – DQS Error – Cannot connect to server – A .NET Framework error occurred during execution of user-defined routine or aggregate “SetDataQualitySessions” – SetDataQualitySessionPhaseTwo SQL SERVER – Configuring Interactive Cleansing Suggestion Min Score for Suggestions in Data Quality Services (DQS) – Sensitivity of Suggestion SQL SERVER – Unable to DELETE Project in Data Quality Projects (DQS) Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Data Quality Services, DQS

    Read the article

  • Oracle Products Reflect Key Trends Shaping Enterprise 2.0

    - by kellsey.ruppel(at)oracle.com
    Following up on his predictions for 2011, we asked Enterprise 2.0 veteran Andy MacMillan to map out the ways Oracle solutions are at the forefront of industry trends--and how Oracle customers can benefit in the coming year. 1. Increase organizational awareness | Oracle WebCenter Suite Oracle WebCenter Suite provides a unique set of capabilities to drive organizational awareness. In particular, the expansive activity graph connects users directly to key enterprise applications, activities, and interests. In this way, applicable and critical business information is automatically and immediately visible--in the context of key tasks--via real-time dashboards and comprehensive reporting. Oracle WebCenter Suite also integrates key E2.0 services, such as blogs, wikis, and RSS feeds, into critical business processes, including back-office systems of records such as ERP and CRM systems. 2. Drive online customer engagement | Oracle Real-Time Decisions With more and more business being conducted on the Web, driving increased online customer engagement becomes a critical key to success. This effort is usually spearheaded by an increasingly important executive role, the Head of Online, who usually reports directly to the CMO. To help manage the Web experience online, Oracle solutions are driving a new kind of intelligent social commerce by combining Oracle Universal Content Management, Oracle WebCenter Services, and Oracle Real-Time Decisions with leading e-commerce and product recommendations. Oracle Real-Time Decisions provides multichannel recommendations for content, products, and services--including seamless integration across Web, mobile, and social channels. The result: happier customers, increased customer acquisition and retention, and improved critical success metrics such as shopping cart abandonment. 3. Easily build composite applications | Oracle Application Development Framework Thanks to the shared user experience strategy across Oracle Fusion Middleware, Oracle Fusion Applications and many other Oracle Applications, customers can easily create real, customer-specific composite applications using Oracle WebCenter Suite and Oracle Application Development Framework. Oracle Application Development Framework components provide modular user interface components that can build rich, social composite applications. In addition, a broad set of components spanning BPM, SOA, ECM, and beyond can be quickly and easily incorporated into composite applications. 4. Integrate records management into a global content platform | Oracle Enterprise Content Management 11g Oracle Enterprise Content Management 11g provides leading records management capabilities as part of a unified ECM platform for managing records, documents, Web content, digital assets, enterprise imaging, and application imaging. This unique strategy provides comprehensive records management in a consistent, cost-effective way, and enables organizations to consolidate ECM repositories and connect ECM to critical business applications. 5. Achieve ECM at extreme scale | Oracle WebLogic Server and Oracle Exadata To support the high-performance demands of a unified and rationalized content platform, Oracle has pioneered highly scalable and high-performing ECM infrastructures. Two innovations in particular helped make this happen. The core ECM platform itself moved to an Enterprise Java architecture, so organizations can now use Oracle WebLogic Server for enhanced scalability and manageability. 
Oracle Enterprise Content Management 11g can leverage Oracle Exadata for extreme performance and scale. Likewise, Oracle Exalogic--Oracle's foundation for cloud computing--enables extreme performance for processor-intensive capabilities such as content conversion or dynamic Web page delivery. Learn more about Oracle's Enterprise 2.0 solutions.

    Read the article

  • Bridging the Gap in Cloud, Big Data, and Real-time

    - by Dain C. Hansen
With all the buzz around big data and cloud computing, it is easy to overlook one of your most precious commodities—your data. Today's businesses cannot stand still when it comes to data. Market success now depends on speed, volume, complexity, and keeping pace with the latest data integration breakthroughs. Are you up to speed with big data, cloud integration, and real-time analytics? Join us in this three-part blog series where we'll look at each component in more detail. Meet us online on October 24th, where we'll take your questions about the issues you are facing in this brave new world of integration. Let's start first with cloud. What happens with your data when you decide to implement a private cloud architecture? Or a public cloud? Data integration solutions play a vital role in migrating data simply, efficiently, and reliably to the cloud; they are a necessary ingredient of any platform-as-a-service strategy because they support cloud deployments with data-layer application integration between on-premise and cloud environments of all kinds. For private cloud architectures, consolidation of your databases and data stores is an important step toward receiving the full benefits of cloud computing. Private cloud integration requires bidirectional replication between heterogeneous systems so that you can perform data consolidation without interrupting your business operations. In addition, bulk loading and transforming data into and out of your private cloud is a crucial step for companies moving to a private cloud, as is managing data services as part of SOA/BPM solutions that enable agile application delivery and help build shared data services for organizations. But what about the public cloud? If you have moved your data to a public cloud application, you may also need to connect your on-premise enterprise systems and the cloud environment by moving data in bulk or as real-time transactions across geographies. For both public and private cloud architectures, Oracle offers a complete and extensible set of integration options that span not only data integration but also service and process integration, security, and management. For those companies investing in Oracle Cloud, you can move your data through Oracle SOA Suite using REST APIs to Oracle Messaging Cloud Service—a new service that lets applications deployed in Oracle Cloud securely and reliably communicate over Java Message Service. As an example of loading and transforming data into other public clouds, Oracle Data Integrator supports a knowledge module for Salesforce.com—now available on AppExchange. Other third-party knowledge modules are being developed by customers and partners every day. To learn more about how to leverage Oracle's Data Integration products for the cloud, join us live: Data Integration Breakthroughs Webcast on October 24th, 10 AM PST.

    Read the article

< Previous Page | 540 541 542 543 544 545 546 547 548 549 550 551  | Next Page >