Search Results

Search found 37875 results on 1515 pages for 'version space'.


  • How to Attach Sticky-Note Reminders to Windows and Applications

    - by Erez Zukerman
    Some applications come with a boatload of keyboard shortcuts; these can make you very fast, but they can be difficult to remember, especially if you have customized some of them. What if you could have your own little cheat sheet that would pop up next to the application every time you ran it? Read on to see how you can make one. We're going to be using an excellent (and free) application called Stickies. If you don't have it yet, go to the Stickies homepage, download it, and install it.

    Read the article

  • Microsoft&rsquo;s new technical computing initiative

    - by Randy Walker
    I made a mental note from earlier in the year: Microsoft literally buys computers by the truckload. From what I understand, it's a typical practice amongst large software vendors. You plug a few wires in, you test it, and you instantly have mega tera tera flops (don't hold me to that number). Microsoft has been plugging away at its cloud services (named Azure), which, for the layman, means Microsoft runs your software on their computers, and as demand increases you can allocate more computing power on the fly.

    With this in mind, it doesn't surprise me that I was recently sent an executive email concerning Microsoft's new technical computing initiative. I find it to be a great marketing idea with actual substance behind the real work. From a programmer's academic perspective, in college we dreamed about this type of processing power; it has decades of computer science theory behind it.

    A copy of the email received (note that I almost deleted this email, thinking it was spam due to its length):

    We don't often think about how complex life really is. Take the relatively simple task of commuting to and from work: it is, in fact, a complicated interplay of variables such as weather, train delays, accidents, traffic patterns, road construction, etc. You can, however, take steps to shorten your commute - using a good, predictive understanding of a few of these variables. In fact, you probably are already taking these inputs and instinctively building a predictive model that you act on daily to get to your destination more quickly.

    Now, when we apply the same method to very complex tasks, this modeling approach becomes much more challenging. Recent world events clearly demonstrated our inability to process vast amounts of information and variables that would have helped to more accurately predict the behavior of global financial markets or the occurrence and impact of a volcano eruption in Iceland.

    To make sense of issues like these, researchers, engineers and analysts create computer models of the almost infinite number of possible interactions in complex systems. But they need increasingly more sophisticated computer models to better understand how the world behaves and to make fact-based predictions about the future. And doing this requires a tremendous amount of computing power to process and examine the massive data deluge from cameras, digital sensors and precision instruments of all kinds. This is the key to creating more accurate and realistic models that expose the hidden meaning of data, which gives us the kind of insight we need to solve a myriad of challenges.

    We have made great strides in our ability to build these kinds of computer models, and yet they are still too difficult, expensive and time consuming to manage. Today, even the most complicated data-rich simulations cannot fully capture all of the intricacies and dependencies of the systems they are trying to model. That is why, across the scientific and engineering world, it is so hard to say with any certainty when or where the next volcano will erupt and what flight patterns it might affect, or to more accurately predict something like a global flu pandemic. So far, we just cannot collect, correlate and compute enough data to create an accurate forecast of the real world. But this is about to change. Innovations in technology are transforming our ability to measure, monitor and model how the world behaves. The implication for scientific research is profound, and it will transform the way we tackle global challenges like health care and climate change. It will also have a huge impact on engineering and business, delivering breakthroughs that could lead to the creation of new products, new businesses and even new industries.

    Because you are a subscriber to executive e-mails from Microsoft, I want you to be the first to know about a new effort focused specifically on empowering millions of the world's smartest problem solvers. Today, I am happy to introduce Microsoft's Technical Computing initiative. Our goal is to unleash the power of pervasive, accurate, real-time modeling to help people and organizations achieve their objectives and realize their potential. We are bringing together some of the brightest minds in the technical computing community across industry, academia and science at www.modelingtheworld.com to discuss trends, challenges and shared opportunities.

    New advances provide the foundation for tools and applications that will make technical computing more affordable and accessible, where mathematical and computational principles are applied to solve practical problems. One day soon, complicated tasks like building a sophisticated computer model - which would typically take a team of advanced software programmers months to build and days to run - will be accomplished in a single afternoon by a scientist, engineer or analyst working at the PC on their desktop. And as technology continues to advance, these models will become more complete and accurate in the way they represent the world. This will speed our ability to test new ideas, improve processes and advance our understanding of systems.

    Our technical computing initiative reflects the best of Microsoft's heritage. Ever since Bill Gates articulated the then far-fetched vision of "a computer on every desktop" in the early 1980s, Microsoft has been at the forefront of expanding the power and reach of computing to benefit the world. As someone who worked closely with Bill for many years at Microsoft, I am happy to share with you that the passion behind that vision is fully alive at Microsoft and is carried out in the creation of our new Technical Computing group.

    Enabling more people to make better predictions

    We have seen the impact of making greater computing power more available firsthand through our investments in high performance computing (HPC) over the past five years. Scientists, engineers and analysts in organizations of all sizes and sectors are finding that using distributed computational power creates societal impact, fuels scientific breakthroughs and delivers competitive advantages. For example, we have seen remarkable results from some of our current customers:

    - Malaria strikes 300,000 to 500,000 people around the world each year. To help in the effort to eradicate malaria worldwide, scientists at Intellectual Ventures use software that simulates how the disease spreads and would respond to prevention and control methods, such as vaccines and the use of bed nets. Technical computing allows researchers to model more detailed parameters for more accurate results and receive those results in less than an hour, rather than waiting a full day.

    - Aerospace engineering firm a.i. solutions, Inc. needed a more powerful computing platform to keep up with the increasingly complex computational needs of its customers: NASA, the Department of Defense and other government agencies planning space flights. To meet that need, it adopted technical computing. Now, a.i. solutions can produce detailed predictions and analysis of the flight dynamics of a given spacecraft - from optimal launch times and orbit determination to attitude control and navigation - up to eight times faster. This enables them to avoid mistakes in any areas that can cause a space mission to fail and potentially result in the loss of life and millions of dollars.

    - Western & Southern Financial Group faced the challenge of running ever larger and more complex actuarial models as its number of policyholders and products grew and regulatory requirements changed. The company chose an actuarial solution that runs on technical computing technology. The solution is easy for the company's IT staff to manage and adjust to meet business needs. The new solution helps the company reduce modeling time by up to 99 percent - letting the team fine-tune its models for more accurate product pricing and financial projections.

    Our Technical Computing direction

    Collaborating closely with partners across industry and academia, we must now extend the reach of technical computing even further to help predictive modelers and data explorers make faster, more accurate predictions. As we build the Technical Computing initiative, we will invest in three core areas:

    - Technical computing to the cloud: Microsoft will play a leading role in bringing technical computing power to scientists, engineers and analysts through the cloud. Existing high-performance computing users will benefit from the ability to augment their on-premises systems with cloud resources that enable 'just-in-time' processing. This platform will help ensure processing resources are available whenever they are needed - reliably, consistently and quickly.

    - Simplify parallel development: Today, computers are shipping with more processing power than ever, including multiple cores, but most modern software only uses a small amount of the available processing power. Parallel programs are extremely difficult to write, test and troubleshoot. However, a consistent model for parallel programming can help more developers unlock the tremendous power in today's modern computers and enable a new generation of technical computing. We are delivering new tools to automate and simplify writing software through parallel processing from the desktop... to the cluster... to the cloud.

    - Develop powerful new technical computing tools and applications: We know scientists, engineers and analysts are pushing common tools (i.e., spreadsheets and databases) to the limits with complex, data-intensive models. They need easy access to more computing power and simplified tools to increase the speed of their work. We are building a platform to do this. Our development efforts will yield new, easy-to-use tools and applications that automate data acquisition, modeling, simulation, visualization, workflow and collaboration. This will allow them to spend more time on their work and less time wrestling with complicated technology.

    Thinking bigger

    There is so much left to be discovered and so many questions yet to be answered in the fascinating world around us. We believe the technical computing community will show us that we have not seen anything yet. Imagine just some of the breakthroughs this community could make possible:

    - Better predictions to help improve the understanding of pandemics, contagion and global health trends.

    - Climate change models that predict environmental, economic and human impact, accessible in real-time during key discussions and debates.

    - More accurate prediction of natural disasters and their impact to develop more effective emergency response plans.

    With an ambitious charter in hand, this new team is ready to build on our progress to date and execute Microsoft's technical computing vision over the months and years ahead. We will steadily invest in the right technologies, tools and talent, and work to bring together the technical computing community. I invite you to visit www.modelingtheworld.com today. We welcome your ideas and feedback. I look forward to making this journey with you and others who want to answer the world's biggest questions, discover solutions to problems that seem impossible and uncover a host of new opportunities to change the world we live in for the better.

    Bob

    Read the article

  • Dryad and DryadLINQ from MSR

    - by Daniel Moth
    Microsoft Research (MSR) researches technologies and incubates projects, which often result in technology that looks like a ready-to-use product (but it is important to understand that these are not the same as products built by the various… actual product teams here at Microsoft). A very popular MSR project has been DryadLINQ, which itself builds on Dryad. To learn more, follow the project pages I just linked to; I also recommend this one-hour Channel 9 video. If you only have 3 minutes, watch this great elevator pitch instead. You can also stay tuned on the official blog, which includes a post that refers to internal adoption (e.g. by Bing), a quick DryadLINQ code example, and some history on how DryadLINQ generalizes the MapReduce pattern and makes it accessible to regular programmers (see this post and that post). Essentially, the DryadLINQ framework (building on the Dryad runtime) allows developers to re-use their LINQ skills for creating/generating programs that process large multi-gigabyte/terabyte datasets across hundreds to thousands of machines. One way to think about it is that just as Parallel LINQ allows LINQ developers to seamlessly use multiple cores from a single process on a single machine, DryadLINQ allows LINQ developers to seamlessly use multiple machines for their data-parallel algorithms. In the former scenario the motivation was speed of execution; in the latter it is speed of execution AND processing large datasets that simply don't fit on a single machine. Whenever I hear about execution of parallel code on multiple machines on the Microsoft platform, I immediately think of Windows HPC Server. Indeed, Dryad and DryadLINQ were made available for Windows HPC Server, and I encourage you to watch the PDC session on this topic: Data-Intensive Computing on Windows HPC Server with the DryadLINQ Framework. Watch this space… Comments about this post welcome at the original blog.
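
    (To make the single-machine half of that analogy concrete, here is a minimal PLINQ sketch - plain .NET, with an illustrative input file name. This is not DryadLINQ code; a DryadLINQ query keeps the same LINQ shape, but the Dryad runtime executes it across a cluster instead of across local cores.)

        using System;
        using System.IO;
        using System.Linq;

        class WordCount
        {
            static void Main()
            {
                // Any large text file works; the name here is illustrative.
                string[] lines = File.ReadAllLines("input.txt");

                // AsParallel() spreads this declarative query across the cores
                // of one machine; DryadLINQ applies the same idea across many.
                var top = lines.AsParallel()
                               .SelectMany(line => line.Split(' '))
                               .GroupBy(word => word)
                               .Select(g => new { Word = g.Key, Count = g.Count() })
                               .OrderByDescending(p => p.Count)
                               .Take(10);

                foreach (var p in top)
                    Console.WriteLine("{0}: {1}", p.Word, p.Count);
            }
        }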

    Read the article

  • Is the Forcefield Really On or Not? [Star Wars Parody Video]

    - by Asian Angel
    What do you get when you mix two not so smart troopers, a very observant princess, and spotty forcefield service? Lots of trouble! Watch as these two troopers use their own brand of reverse psychology, spur-of-the-moment sound effects, and more to try and convince the princess that the forcefield is still on! TROOPERS – Forcefield [via Geeks are Sexy]

    Read the article

  • How should I configure TRIM Support for LVM logical volumes?

    - by Zack Perry
    I am setting up a notebook for software demo purposes. The machine has an Intel Core i7 CPU, 8GB RAM, and a 128GB SSD, and runs Ubuntu 12.04 LTS 64-bit desktop. As it is, the SSD is configured with a single volume group, with /boot, swap, and / each in their own logical volume. They collectively consume 30GB of space. I plan to use the remainder for logical volumes for KVM guests, all running Ubuntu 12.04 Server. I would like to ensure that the SSD is utilized optimally. Although on this site there is some great info about setting up TRIM support for file system setups that do not involve LVM, I have not found an explicit guide for my planned setup. I did find this page, which talks about adding issue_discards in /etc/lvm/lvm.conf. But in said file on my machine, I didn't find the cited content. I double-checked man lvm.conf(5) and didn't see any mention of this option either. Thus, I'm not sure what to do. Furthermore, even if adding the option is the right thing to do, should I still add mount options such as noatime to my machine's /etc/fstab? Any tips, pointers, and/or further guidance are greatly appreciated.
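
    (For reference, a sketch of the two pieces under discussion, assuming ext4 on the logical volumes. Note that issue_discards only appeared in newer lvm2 releases - around 2.02.85 - which may be why it is absent from the stock lvm.conf and man page on this machine.)

        # /etc/lvm/lvm.conf -- goes inside the existing "devices" section;
        # makes LVM pass discards to the SSD when an LV is removed or reduced.
        devices {
            issue_discards = 1
        }

        # /etc/fstab -- per-filesystem options: "discard" enables online TRIM,
        # "noatime" avoids a metadata write on every read. Device name is
        # illustrative only.
        /dev/mapper/vg0-root  /  ext4  defaults,noatime,discard  0  1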

    Read the article

  • CodePlex Daily Summary for Wednesday, October 10, 2012

    Popular Releases

    - A C# 4.0 Push Notification Helper Library for WP7.0 & WP7.1: Easy Notification 1.0.0: New Feature - Send Tile, Toast & Raw Notifications to Windows Phone Devices. New Feature - Supports Windows Phone 7.0 & Windows Phone 7.1. New Feature - Validation rules are in-built for Push Notification Messages. New Feature - Strongly typed interfaces. New Feature - Supports synchronous & asynchronous methods to send notifications. New Feature - Supports authenticated notifications using X509 Certificates. New Feature - Supports Callback Registration Requests. New Feature - S...

    - D3 Loot Tracker: 1.5.4: Fixed a bug where the server IP was not logged properly in the stats file.

    - Captcha MVC: Captcha Mvc 2.1.2: v 2.1.2: Fixed problem with serialization. Made all classes from the Jetbrains.Annotations namespace internal. Added autocomplete attribute and autocorrect attribute for the captcha input element. Minor changes. v 2.1.1: Fixed problem with serialization. Minor changes. v 2.1: Added support for storing the captcha in the session or a cookie. See the updated example. Updated example. Minor changes. v 2.0.1: Added support for a partial captcha. Now you can easily customize the layout, s...

    - DotNetNuke® Community Edition CMS: 06.02.04: Major highlights: Fixed issue where the module printing function was only visible to administrators. Fixed issue where pane-level skinning was being assigned to a default container for any content pane. Fixed issue when using password aging and FB / Google authentication. Fixed issue that was causing the DateEditControl to not load the assigned value. Fixed issue that stopped additional profile properties being displayed in the member directory after modifying the template. Fixed er...

    - Online Image Editor: Online Image Editor v1.1: If you have problems with this tool, please email me at amisouvikdas@gmail.com, or you can participate in this project to improve it.

    - Advanced DataGridView with Excel-like auto filter: 1.0.0.0: ?????? ??????

    - Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.69: Fix for issue #18766: build task should not build the output if it's newer than all the input files. Fix for issue #18764: build task -res switch not working. Update build task to concatenate input source and then minify, rather than minify and then concatenate. Include resource string-replacement root name in the assumed globals list. Stop replacing new Date().getTime() with +new Date -- the latter is smaller, but it turns out it executes up to 45% slower. Add CSS support for single-...

    - WinRT XAML Toolkit: WinRT XAML Toolkit - 1.3.3: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For the compiled version use NuGet; you can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit. Features: Attachable Behaviors, AwaitableUI extensions, Controls, Converters, Debugging helpers, Extension methods, Imaging helpers, IO helpers, VisualTree helpers, Samples. Recent changes NOTE:...

    - DevLib: 69721 binary dll: 69721 binary dll

    - VidCoder: 1.4.4 Beta: Fixed inability to create new presets with "Save As".

    - MCEBuddy 2.x: MCEBuddy 2.3.2: Changelog for 2.3.2 (32bit and 64bit): 1. Added support for generating XBMC XML NFO files for files in the conversion queue (stored along with the source video as "source video name.nfo"); right-click on the file in the queue and select Generate XML. 2. UI bugfix: start and end trim box locations interchanged. 3. Added support for removing commercials from non-DVRMS/WTV files (MP4, AVI etc). 4. Now checking firewall port status before enabling (might help with some firewall problems). 5. User In...

    - DotNetNuke Boards: 01.00.00: This beta release represents the end of training session 1, based on jQuery and Knockout integration in DotNetNuke 6.2.3. It is designed to allow a single user or multiple users to add/edit/delete cards, but no cards can be assigned to anyone at this point. This release is not intended for production use!

    - Sandcastle Help File Builder: SHFB v1.9.5.0 with Visual Studio Package: General information. IMPORTANT: On some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right-click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This release supports the Sandcastle October 2012 Release (v2.7.1.0). It includes full support for generating, installing, and removing MS Help Viewer files. This new release suppor...

    - ClosedXML - The easy way to OpenXML: ClosedXML 0.68.0: ClosedXML now resolves formulas! Yes, it finally happened. If you call cell.Value and it has a formula, the library will try to evaluate the formula and give you the result. For example: var wb = new XLWorkbook(); var ws = wb.AddWorksheet("Sheet1"); ws.Cell("A1").SetValue(1).CellBelow().SetValue(1); ws.Cell("B1").SetValue(1).CellBelow().SetValue(1); ws.Cell("C1").FormulaA1 = "\"The total value is: \" & SUM(A1:B2)"; var...

    - Json.NET: Json.NET 4.5 Release 10: New feature - Added Portable build to NuGet package. New feature - Added GetValue and TryGetValue with StringComparison to JObject. Change - Improved duplicate object reference id error message. Fix - Fixed error when comparing empty JObjects. Fix - Fixed SecAnnotate warnings. Fix - Fixed error when comparing a DateTime JValue with a DateTimeOffset JValue. Fix - Fixed serializer sometimes not using DateParseHandling setting. Fix - Fixed error in JsonWriter.WriteToken when writing a DateT...

    - Readable Passphrase Generator: KeePass Plugin 0.7.2: Changes: Tested against KeePass 2.20.1. Tested under Ubuntu 12.10 (and KeePass 2.20). Added GenerateAsUtf8 method returning the encrypted passphrase as a UTF8 byte array.

    - JSLint for Visual Studio 2010: 1.4.2: 1.4.2

    - patterns & practices: Prism: Prism for .NET 4.5: This release does not include any functionality changes over Prism 4.1 Desktop. These assemblies target .NET 4.5. These assemblies were also compiled against updated dependencies: Unity 3.0 and Common Service Locator (Portable Class Library).

    - Snoop, the WPF Spy Utility: Snoop 2.8.0: Announcing Snoop 2.8.0! It's been exactly six months since the last release, and this one has a bunch of goodies in it. In particular, there is now a PowerShell scripting tab, compliments of Bailey Ling. With this tab, the possibilities are limitless; it basically lets you automate/script the application that you are Snooping. Bailey has a couple of blog posts (one and two) on his tab already, and I am sure more are to come. Please note that if you do not have PowerShell installed, y...

    - .NET Micro Framework: .NET MF 4.3 (Beta): This is the 4.3 Beta version of the .NET Micro Framework. Feature list for v4.3: Support for Visual Studio 2012 (including the Windows Desktop Express version). All v4.2 QFE features and bug fixes (PWM enhancements, lwIP and network driver reliability improvements, Analog Output, WinUSB and latest GCC support). Improved diagnostic information for deployment. Decreased boot time. Bug fixes. Work Item 1736 - Create link for MFDeploy under start menu. Work Item 1504 - Customizing lwIP o...

    New Projects

    - adcc2: adcc
    - AP.Framework: This is an ASP.NET MVC3 test web site.
    - EFCodeFirst: Project created for experiments and studies with Entity Framework.
    - EXPS-RAT: Hello
    - Fiskalizacija za developere: The project is intended for development engineers, programmers, developers, and everyone else who works on software solutions for fiscal cash registers.
    - Galleriffic App for SharePoint 2013: Galleriffic App is an app part for SharePoint 2013 to display a picture gallery with cool jQuery animations and effects.
    - Hack.net: Hack.net is a Roguelike clone similar to NetHack or Rogue.
    - Inno Setup For .NET Application: This is a simple Inno Setup for .NET developers.
    - Java Special Functions Library: Java Special Functions Library implemented as a public class, part of my larger mathematical package.
    - LocalFileOpener: The LocalFileOpener plugin opens an intent to help you easily open local files on the device under installed applications.
    - localizr - .NET Collaborative Translation & Localization: Localizr is a platform for collaborative localization and translation of .NET projects.
    - Metro UI CSS: Metro UI CSS is a set of styles to create a site with an interface similar to the Windows 8 Metro UI.
    - MoskieDotNet: A sandbox for me to learn some new technologies.
    - Mouse Automation: Allows a user to automate repetitive clicking within EverQuest.
    - MS SQL DB Schema Updater: Simple tool for updating MS SQL database schemas based on an "ideal" database model.
    - n8design Tools: Tools any Source Code by Stefan Bauer.
    - NasosCS: ???? ?? ??????
    - Pokemonochan: Alright, this is a Pokemon MMO bound to come out someday!
    - PotatoSoft: ?????????????!
    - Power Mirrors: Leverage the usefulness of SQL Server mirrors using PowerShell and SMO. Create mirrors from scratch, assign witnesses and test failovers, all from PowerShell!
    - Project13251010: sdfd
    - Real Time Data Bus: A collection of real-time data bus implementations.
    - Ricky Tailor's ASP.NET Web 2.0 Project: 7COM0203 Web Scripting And Content Creation, Task 1.
    - SampleTFS2: Sample Project
    - slotcarduino - An Arduino based slotcar timing project: Slotcar timing project based on the Arduino Uno/Mega 2560. Standalone system with a serial-enabled graphic display.
    - System.Data.Entity.Repository: Entity Framework code-first wrapper with support for the generic repository pattern, N-tier applications and transaction management for rapid development.
    - Test Marron: This project is a test to explore CodePlex.
    - Tmib Video Downloader: A small YouTube video downloader. Created in C#.
    - Web Scripting & Content Creation: Fhame Rashid, MSc Software Engineering module log: Web-Scripting and Content Creation.
    - webbase: webbase
    - WebScriptingandContentCreation: This project is for the MSc module Web Scripting and Content Creation.
    - Zuordnung: Zuordnung

    Read the article

  • Oracle Solaris 11 ZFS Lab for Openworld 2012

    - by user12626122
    Preface

    This is the content from the Oracle OpenWorld 2012 ZFS lab. It was well attended - the feedback was that it was a little short - that's probably because in writing it I became very time-conscious after the ASM/ACFS on Solaris extravaganza I ran last year, which was almost too long for mortal man to finish in the one-hour session. Enjoy.

    Table of Contents

    Exercise Z.1: ZFS Pools
    Exercise Z.2: ZFS File Systems
    Exercise Z.3: ZFS Compression
    Exercise Z.4: ZFS Deduplication
    Exercise Z.5: ZFS Encryption
    Exercise Z.6: Solaris 11 Shadow Migration

    Introduction

    This set of exercises is designed to briefly demonstrate the new features in the Solaris 11 ZFS file system: Deduplication, Encryption and Shadow Migration. Also included is the creation of zpools and zfs file systems - the basic building blocks of the technology - and Compression, which is the complement of Deduplication. The exercises are just introductions; you are referred to the ZFS Administration Manual for further information. From Solaris 11 onward the online manual pages consist of zpool(1M) and zfs(1M), with further feature-specific information in zfs_allow(1M), zfs_encrypt(1M) and zfs_share(1M). The lab is easily carried out in VirtualBox running Solaris 11 with six virtual 3 GB disks to play with.

    Exercise Z.1: ZFS Pools

    Task: You have several disks to use for your new file system. Create a new zpool and a file system within it.

    Lab: You will check the status of existing zpools, create your own pool and expand it.

    Your Solaris 11 installation already has a root ZFS pool. It contains the root file system. Check this:

    root@solaris:~# zpool list
    NAME   SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
    rpool  15.9G  6.62G  9.25G  41%  1.00x  ONLINE  -

    root@solaris:~# zpool status
      pool: rpool
     state: ONLINE
      scan: none requested
    config:
            NAME        STATE   READ WRITE CKSUM
            rpool       ONLINE     0     0     0
              c3t0d0s0  ONLINE     0     0     0
    errors: No known data errors

    Note the disk device the root pool is on - c3t0d0s0. Now you will create your own ZFS pool. First you will check what disks are available:

    root@solaris:~# echo | format
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
        0. c3t0d0 <ATA-VBOX HARDDISK-1.0 cyl 2085 alt 2 hd 255 sec 63> /pci@0,0/pci8086,2829@d/disk@0,0
        1. c3t2d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@2,0
        2. c3t3d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@3,0
        3. c3t4d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@4,0
        4. c3t5d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@5,0
        5. c3t6d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@6,0
        6. c3t7d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@7,0
    Specify disk (enter its number): Specify disk (enter its number):

    The root disk is numbered 0. The others are free for use. Try creating a simple pool and observe the error message:

    root@solaris:~# zpool create mypool c3t2d0 c3t3d0
    'mypool' successfully created, but with no redundancy; failure of one device will cause loss of the pool

    So destroy that pool and create a mirrored pool instead:

    root@solaris:~# zpool destroy mypool
    root@solaris:~# zpool create mypool mirror c3t2d0 c3t3d0
    root@solaris:~# zpool status mypool
      pool: mypool
     state: ONLINE
      scan: none requested
    config:
            NAME        STATE   READ WRITE CKSUM
            mypool      ONLINE     0     0     0
              mirror-0  ONLINE     0     0     0
                c3t2d0  ONLINE     0     0     0
                c3t3d0  ONLINE     0     0     0
    errors: No known data errors

    Exercise Z.2: ZFS File Systems

    Task: You have to create file systems for later exercises.

    You can see that when a pool is created, a file system of the same name is created:

    root@solaris:~# zfs list
    NAME    USED   AVAIL  REFER  MOUNTPOINT
    mypool  86.5K  2.94G  31K    /mypool

    Create your filesystems and mountpoints as follows:

    root@solaris:~# zfs create -o mountpoint=/data1 mypool/mydata1

    The -o option sets the mount point and automatically creates the necessary directory.

    root@solaris:~# zfs list mypool/mydata1
    NAME            USED  AVAIL  REFER  MOUNTPOINT
    mypool/mydata1  31K   2.94G  31K    /data1

    Exercise Z.3: ZFS Compression

    Task: Try out the different forms of compression available in ZFS.

    Lab: Create a second filesystem with compression, fill both file systems with the same data, and observe the results.

    You can see from the zfs(1) manual page that there are several types of compression available to you, set with the property=value syntax:

    compression=on | off | lzjb | gzip | gzip-N | zle
    Controls the compression algorithm used for this dataset. The lzjb compression algorithm is optimized for performance while providing decent data compression. Setting compression to on uses the lzjb compression algorithm. The gzip compression algorithm uses the same compression as the gzip(1) command. You can specify the gzip level by using the value gzip-N, where N is an integer from 1 (fastest) to 9 (best compression ratio). Currently, gzip is equivalent to gzip-6 (which is also the default for gzip(1)).

    Create a second filesystem with compression turned on. Note how you set and get your values separately:

    root@solaris:~# zfs create -o mountpoint=/data2 mypool/mydata2
    root@solaris:~# zfs set compression=gzip-9 mypool/mydata2
    root@solaris:~# zfs get compression mypool/mydata1
    NAME            PROPERTY     VALUE  SOURCE
    mypool/mydata1  compression  off    default
    root@solaris:~# zfs get compression mypool/mydata2
    NAME            PROPERTY     VALUE   SOURCE
    mypool/mydata2  compression  gzip-9  local

    Now you can copy the contents of /usr/lib into both your normal and compressing filesystems and observe the results. Don't forget the dot or period (".") in the find(1) command below:

    root@solaris:~# cd /usr/lib
    root@solaris:/usr/lib# find . -print | cpio -pdv /data1
    root@solaris:/usr/lib# find . -print | cpio -pdv /data2

    The copy into the compressing file system takes longer - as it has to perform the compression - but the results show the effect:

    root@solaris:/usr/lib# zfs list
    NAME            USED   AVAIL  REFER  MOUNTPOINT
    mypool          1.35G  1.59G  31K    /mypool
    mypool/mydata1  1.01G  1.59G  1.01G  /data1
    mypool/mydata2  341M   1.59G  341M   /data2

    Note that the available space in the pool is shared amongst the file systems. This behavior can be modified using quotas and reservations, which are not covered in this lab but are covered extensively in the ZFS Administrators Guide.

    Exercise Z.4: ZFS Deduplication

    The deduplication property is used to remove redundant data from a ZFS file system. With the property enabled, duplicate data blocks are removed synchronously. The result is that only unique data is stored and common components are shared.

    Task: See how to implement deduplication and its effects.

    Lab: You will create a ZFS file system with deduplication turned on and see if it reduces the amount of physical storage needed when we again fill it with a copy of /usr/lib.

    root@solaris:/usr/lib# zfs destroy mypool/mydata2
    root@solaris:/usr/lib# zfs set dedup=on mypool/mydata1
    root@solaris:/usr/lib# rm -rf /data1/*
    root@solaris:/usr/lib# mkdir /data1/2nd-copy
    root@solaris:/usr/lib# zfs list
    NAME            USED   AVAIL  REFER  MOUNTPOINT
    mypool          1.02M  2.94G  31K    /mypool
    mypool/mydata1  43K    2.94G  43K    /data1
    root@solaris:/usr/lib# find . -print | cpio -pd /data1
    2142768 blocks
    root@solaris:/usr/lib# zfs list
    NAME            USED   AVAIL  REFER  MOUNTPOINT
    mypool          1.02G  1.99G  31K    /mypool
    mypool/mydata1  1.01G  1.99G  1.01G  /data1
    root@solaris:/usr/lib# find . -print | cpio -pd /data1/2nd-copy
    2142768 blocks
    root@solaris:/usr/lib# zfs list
    NAME            USED   AVAIL  REFER  MOUNTPOINT
    mypool          1.99G  1.96G  31K    /mypool
    mypool/mydata1  1.98G  1.96G  1.98G  /data1

    You could go on creating copies for quite a while... but you get the idea. Note that deduplication and compression can be combined: the compression acts on metadata. Deduplication works across file systems in a pool, and there is a zpool-wide property dedupratio:

    root@solaris:/usr/lib# zpool get dedupratio mypool
    NAME    PROPERTY    VALUE  SOURCE
    mypool  dedupratio  4.30x  -

    Deduplication can also be checked using "zpool list":

    root@solaris:/usr/lib# zpool list
    NAME    SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
    mypool  2.98G  1001M  2.01G  32%  4.30x  ONLINE  -
    rpool   15.9G  6.66G  9.21G  41%  1.00x  ONLINE  -

    Before moving on to the next topic, destroy that dataset and free up some space:

    root@solaris:~# zfs destroy mypool/mydata1

    Exercise Z.5: ZFS Encryption

    Task: Encrypt sensitive data.

    Lab: Explore basic ZFS encryption. This lab only covers the basics of ZFS Encryption. In particular it does not cover various aspects of key management. Please see the ZFS Administration Manual and the zfs_encrypt(1M) manual page for more detail on this functionality.

    root@solaris:~# zfs create -o encryption=on mypool/data2
    Enter passphrase for 'mypool/data2': ********
    Enter again: ********
    root@solaris:~#

    Creation of a descendent dataset shows that encryption is inherited from the parent:

    root@solaris:~# zfs create mypool/data2/data3
    root@solaris:~# zfs get -r encryption,keysource,keystatus,checksum mypool/data2
    NAME                PROPERTY    VALUE              SOURCE
    mypool/data2        encryption  on                 local
    mypool/data2        keysource   passphrase,prompt  local
    mypool/data2        keystatus   available          -
    mypool/data2        checksum    sha256-mac         local
    mypool/data2/data3  encryption  on                 inherited from mypool/data2
    mypool/data2/data3  keysource   passphrase,prompt  inherited from mypool/data2
    mypool/data2/data3  keystatus   available          -
    mypool/data2/data3  checksum    sha256-mac         inherited from mypool/data2

    You will find the online manual page zfs_encrypt(1M) contains examples. In particular, if time permits during this lab session, you may wish to explore the changing of a key using "zfs key -c mypool/data2".

    Exercise Z.6: Shadow Migration

    Shadow Migration allows you to migrate data from an old file system to a new file system while simultaneously allowing access and modification of the new file system during the process. You can use Shadow Migration to migrate a local or remote UFS or ZFS file system to a local file system.

    Task: You wish to migrate data from one file system (UFS, ZFS, VxFS) to ZFS while maintaining access to it.

    Lab: Create the infrastructure for shadow migration and transfer one file system into another.

    First create the file system you want to migrate:

    root@solaris:~# zpool create oldstuff c3t4d0
    root@solaris:~# zfs create oldstuff/forgotten

    Then populate it with some files:

    root@solaris:~# cd /var/adm
    root@solaris:/var/adm# find . -print | cpio -pdv /oldstuff/forgotten

    You need the shadow-migration package installed:

    root@solaris:~# pkg install shadow-migration
    Packages to install: 1
    Create boot environment: No
    Create backup boot environment: No
    Services to change: 1
    DOWNLOAD      PKGS  FILES  XFER (MB)
    Completed     1/1   14/14  0.2/0.2
    PHASE                       ACTIONS
    Install Phase               39/39
    PHASE                       ITEMS
    Package State Update Phase  1/1
    Image State Update Phase    2/2

    You then enable the shadowd service:

    root@solaris:~# svcadm enable shadowd
    root@solaris:~# svcs shadowd
    STATE   STIME    FMRI
    online  7:16:09  svc:/system/filesystem/shadowd:default

    Set the filesystem to be migrated to read-only:

    root@solaris:~# zfs set readonly=on oldstuff/forgotten

    Create a new zfs file system with the shadow property set to the file system to be migrated:

    root@solaris:~# zfs create -o shadow=file:///oldstuff/forgotten mypool/remembered

    Use the shadowstat(1M) command to see the progress of the migration:

    root@solaris:~# shadowstat
                       EST BYTES  BYTES
    DATASET            XFRD       LEFT   ERRORS  ELAPSED TIME
    mypool/remembered  92.5M      -      -       00:00:59
    mypool/remembered  99.1M      302M   -       00:01:09
    mypool/remembered  109M       260M   -       00:01:19
    mypool/remembered  133M       304M   -       00:01:29
    mypool/remembered  149M       339M   -       00:01:39
    mypool/remembered  156M       86.4M  -       00:01:49
    mypool/remembered  156M       8E     29      (completed)

    Note that if you had created /mypool/remembered as encrypted, this would be the preferred method of encrypting existing data. Similarly for compressing or deduplicating existing data. The procedure for migrating a file system over NFS is similar - see the ZFS Administration Manual.

    That concludes this lab session.

    Read the article

  • Installing mysql-server on 10.04LTS gives "404 Not Found" error

    - by bc1
    Hi, I am trying to install MySQL on Ubuntu 10.04 LTS (Lucid Lynx) and I am getting this error. Is this a server-side issue - is the server up? I am running this from the command line on a remote server...

    sudo apt-get install mysql-server
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following extra packages will be installed: libdbd-mysql-perl libdbi-perl libhtml-template-perl libmysqlclient16 libnet-daemon-perl libplrpc-perl mysql-client-5.1 mysql-client-core-5.1 mysql-common mysql-server-5.1 mysql-server-core-5.1 psmisc
    Suggested packages: dbishell libipc-sharedcache-perl tinyca mailx
    The following NEW packages will be installed: libdbd-mysql-perl libdbi-perl libhtml-template-perl libmysqlclient16 libnet-daemon-perl libplrpc-perl mysql-client-5.1 mysql-client-core-5.1 mysql-common mysql-server mysql-server-5.1 mysql-server-core-5.1 psmisc
    0 upgraded, 13 newly installed, 0 to remove and 85 not upgraded.
    Need to get 23.2MB/24.3MB of archives.
    After this operation, 61.7MB of additional disk space will be used.
    Do you want to continue [Y/n]? Y
    Err http://archive.ubuntu.com/ubuntu/ lucid-updates/main mysql-common 5.1.62-0ubuntu0.10.04.1
      404 Not Found [IP: 91.189.92.192 80]
    <more of the same error messages here>
    Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/m/mysql-dfsg-5.1/mysql-common_5.1.62-0ubuntu0.10.04.1_all.deb 404 Not Found [IP: 91.189.92.166 80]
    <more of the same error messages here>
    E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
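
    (For what it's worth, apt's own last line suggests the likely fix: a 404 on a specific .deb usually means the mirror has moved on to a newer point release than the locally cached index knows about, so refreshing the index first often resolves it.)

        # Refresh the package index, then retry the install.
        sudo apt-get update
        sudo apt-get install mysql-server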

    Read the article

  • Use a Crayon to Enhance Engraved Lettering on Electronics

    - by ETC
    Whether you're making a new electronics project or trying to add definition to an old piece, you can use a simple crayon to make etched logos and text pop. At RedToRope, the DIY and project blog of electrical engineering student and tinkerer James Williamson, James shares how he used a crayon and a little heat to make the lettering and symbols on his electronics project really pop. It's an old trick I've used many times over the years with firearms but had never thought to use with engraved text on electronics or other devices. You rub the wax into the crevices of the etching, heat the object to melt and level the wax, and then give it a final cleanup buff. Hit up the link below to see the final results of his project as well as all the steps he went through to make the final product look so professional. Laser Engraved, Wax Filled, High Contrast Panels for Electronics Projects [RedToRope via Hacked Gadgets]

    Read the article

  • Mission critical embedded language

    - by Moe
    Maybe the question sounds a bit strange, so I'll explain the background a little. Currently I'm working on a project at my university, which will be the complete on-board software for a satellite. The system is programmed in C++ on top of a real-time operating system. However, some subsystems - like the attitude control system, the fault detection, and a space simulation - are currently only implemented in Matlab/Simulink, to prototype the algorithms efficiently. After their verification, they will be translated into C++. The complete on-board software has grown very complex, and only a handful of people know the whole system. Furthermore, many of the students haven't programmed in C++ yet, and the manual memory management of C++ makes it even more difficult to write mission-critical software. Of course the main system has to be implemented in C++, but I asked myself whether it might be possible to use an embedded language to implement the subsystems which are currently written in Matlab. This embedded language should feature: static/strong typing and compiler checks to minimize runtime errors; small memory usage and a relatively fast runtime; good numeric support, since attitude control algorithms are mainly numerical computations; and maybe some sort of functional programming feature - Matlab/Simulink encourages you to use that style too. I googled a bit, but only found Lua. It looks nice, but I would not use it in mission-critical software. Have you ever encountered a situation like this, or do you know any language which could satisfy the conditions? EDIT: To clarify some things: embedded means it should be possible to embed the language into the existing C++ environment. So no compiled languages like Ada or Haskell ;)

    Read the article

  • Progress 4GL and DB to Oracle and cloud

    - by llaszews
    Getting from client/server-based 4GLs and databases, where the 4GL is tightly linked to the database, to Oracle and the cloud is not easy. The least risky and least expensive option (in the short term) is to use the Progress OpenEdge DataServer for Oracle: Progress OpenEdge DataServer. This eliminates the need to migrate the Progress 4GL to Java/J2EE. The database can be migrated using Ispirer SQLWays: Ispirer SQLWays Progress DB migration tool. The Progress 4GL can remain as is. In order to get the application onto the cloud there are a few approaches:

    1. VDI - Virtual desktop infrastructure is a way to move all of the users' desktops into a centralized environment, off the desktop. This is great in cases where it is not just one client/server application that the user needs access to. In many cases, users will utilize MS Access, MS Excel, Crystal Reports and other tools to get at the Progress DB and other centralized databases. VMware's acquisition of Wanova shows how VDI is growing in usage. Citrix is the 800-pound gorilla in the VDI space with Citrix WinFrame (now called XenDesktop). Oracle offers a VDI solution that it picked up when it acquired Sun.

    2. Hypervisor server virtualization - Of course you can host applications written in client/server languages like Progress 4GL by using server virtualization from Oracle, VMware, Microsoft, Citrix and others.

    3. Microsoft Remote Desktop Services (aka the Terminal Services client).

    The entire idea is to eliminate all the client/server desktop devices and connections which require desktop software and database drivers. A solution for removing the database drivers from the desktop is to use DataDirect SQLLink.

    Read the article

  • Facebook PHP SDK and Wordpress Error

    - by Gecko
    I have a developer environment set up with WAMP, Wordpress, and the PHPEdit IDE. I use the Facebook, Twitter, and YouTube APIs in a sidebar. I'm using Facebook's PHP SDK to display information (no login or admin functions). Since the FB SDK and WP both use session_start(), I get the following warning:

    Warning: session_start() [function.session-start]: Cannot send session cache limiter - headers already sent (output started at C:\wamp\www\dfi\wp-content\themes\DFI\header.php:12) in C:\wamp\www\dfi\wp-content\themes\DFI\api\facebook.php on line 36

    I'm trying to figure this out from the warning output, but it doesn't help, considering the following: I know about clearing whitespace and characters before and after <?php ?> and placing session_start() before any HTTP output. I use Unix line endings and UTF-8 encoding without BOM. My host server is not set up for output_buffering.

    header.php, lines 11 to 13:

    11 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    12 <html xmlns="http://www.w3.org/1999/xhtml" <?php language_attributes();?>>
    13 <head>

    It looks like the warning comes from the inline PHP code, but I don't know what I can do to fix this line.

    facebook.php, lines 34 to 37:

    34 public function __construct($config) {
    35     if (!session_id()) {
    36         session_start();
    37     }

    I don't think I can stop either FB or WP from calling session_start() without breaking everything. How do I make Wordpress and Facebook play nicely together without this error?
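
    (One common workaround - a sketch, not an official fix from either project - is to open the session yourself before WordPress emits any output, for example near the top of the theme's functions.php, so the !session_id() guard in facebook.php finds an existing session and skips its own session_start() call:)

        <?php
        // Hypothetical addition near the top of functions.php: start the
        // session before any HTML is output, so the SDK's session_start()
        // on line 36 is never reached with headers already sent.
        if (!session_id()) {
            session_start();
        }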

    Read the article

  • Trace flags - TF 1117

    - by Damian
    I gave a session about trace flags this year at the SQL Day 2014 conference, held in Wroclaw at the end of April. The session topic is important to most DBAs, and the reason I did it was that I sometimes forget about various trace flags :). So I decided to prepare a presentation, but I think it is a good idea to write posts about trace flags, too. Let's start then - today I will describe TF 1117. I assume that we all know how to set a TF using startup parameters, the registry, the session, or the query level. I will always note whether a trace flag is local or global, to make sure we know how to use it.

    Why do we need this trace flag? Let's create a test database first. This is a quite ordinary database: it has two data files (4 MB each) and a log file of 1 MB. The data files are able to expand by 1 MB and the log file grows by 10%:

    USE [master]
    GO
    CREATE DATABASE [TF1117] ON PRIMARY
    ( NAME = N'TF1117',
      FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\TF1117.mdf',
      SIZE = 4096KB, MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB ),
    ( NAME = N'TF1117_1',
      FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\TF1117_1.ndf',
      SIZE = 4096KB, MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB )
    LOG ON
    ( NAME = N'TF1117_log',
      FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\TF1117_log.ldf',
      SIZE = 1024KB, MAXSIZE = 2048GB, FILEGROWTH = 10% )
    GO

    Without TF 1117 turned on, the data files don't all grow at once. When the first file is full, SQL Server expands it, but the other file is not expanded until it too is full. Why is that so important? The SQL Server proportional fill algorithm directs new extent allocations to the file with the most available space, so new extents will be written to the file that was just expanded. When TF 1117 is enabled, it causes all files to auto-grow by their specified increment. That means all files keep the same percentage of free space, so we still have the benefit of evenly distributed IO. TF 1117 is a global flag, so it affects all databases on the instance. Of course, if a filegroup contains only one file, the TF has no effect on it.

    Now let's do a simple test. First let's create a table in which every row fits on a single page. The table definition is pretty simple: it has two integer columns and one character column of fixed size 8000 bytes:

    create table TF1117Tab
    (
        col1 int,
        col2 int,
        col3 char (8000)
    )
    go

    Now I load some data into the table to make sure that one of the data files must grow:

    declare @i int
    select @i = 1
    while (@i < 800)
    begin
        insert into TF1117Tab values (@i, @i+1000, 'hello')
        select @i = @i + 1
    end

    I can check the actual file sizes in the sys.database_files DMV:

    SELECT name, (size*8)/1024 'Size in MB'
    FROM sys.database_files
    GO

    As you can see, only the first data file was expanded and the other still has the initial size:

    name                  Size in MB
    --------------------- -----------
    TF1117                5
    TF1117_log            1
    TF1117_1              4

    There are also other methods of looking at file autogrow events. One possibility is to create an Extended Events session; the other is to look into the default trace file:

    DECLARE @path NVARCHAR(260);
    SELECT @path = REVERSE(SUBSTRING(REVERSE([path]),
           CHARINDEX('\', REVERSE([path])), 260)) + N'log.trc'
    FROM sys.traces
    WHERE is_default = 1;

    SELECT DatabaseName,
           [FileName],
           SPID,
           Duration,
           StartTime,
           EndTime,
           FileType = CASE EventClass
                          WHEN 92 THEN 'Data'
                          WHEN 93 THEN 'Log'
                      END
    FROM sys.fn_trace_gettable(@path, DEFAULT)
    WHERE EventClass IN (92,93)
      AND StartTime > '2014-07-12'
      AND DatabaseName = N'TF1117'
    ORDER BY StartTime DESC;

    After running the query I can see that the file was expanded, and how long the process took, which might be useful from a performance perspective.

    Now it's time to turn on trace flag 1117:

    DBCC TRACEON(1117)

    I dropped the database and recreated it once again. Then I ran the queries and observed the results. After loading the records I see that both files were evenly expanded:

    name                  Size in MB
    --------------------- -----------
    TF1117                5
    TF1117_log            1
    TF1117_1              5

    I also found information in the default trace. The query returned three rows. The last one is connected to my first experiment, when the TF was turned off. The other two rows show that the first file was expanded by 1 MB and that right after that operation the second file was expanded, too. That is what this TF is all about. :)
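
    For completeness, here is a sketch of the Extended Events alternative mentioned above (assuming SQL Server 2012 or later, where the database_file_size_change event is available; the session and target file names are illustrative):

    -- Capture file growth (and shrink) events for the TF1117 database.
    CREATE EVENT SESSION [TrackFileGrowth] ON SERVER
    ADD EVENT sqlserver.database_file_size_change
        (WHERE (sqlserver.database_name = N'TF1117'))
    ADD TARGET package0.event_file (SET filename = N'TrackFileGrowth.xel');
    GO
    ALTER EVENT SESSION [TrackFileGrowth] ON SERVER STATE = START;
    GO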

    Read the article

  • Emacs: methods for debugging python

    - by Tom Willis
    I use Emacs for all my code editing needs. Typically, I will use M-x compile to run my test runner, which I would say gets me about 70% of what I need to keep the code on track. Lately, however, I've been wondering how it might be possible to use M-x pdb on occasions where it would be nice to hit a breakpoint and inspect things. In my googling I've found some things that suggest this is useful/possible. However, I have not managed to get it working in a way that I fully understand. I don't know if it's the combination of buildout + App Engine that might be making it more difficult, but when I try to do something like M-x pdb, I run pdb like this:

    /Users/twillis/projects/hydrant/bin/python /Users/twillis/bin/pdb /Users/twillis/projects/hydrant/bin/devappserver /Users/twillis/projects/hydrant/parts/hydrant-app/

    Here, .../bin/python is the interpreter buildout makes with the path set for all the eggs, ~/bin/pdb is a simple script to call into pdb.main using the current Python interpreter, .../bin/devappserver is the dev_appserver script that the buildout recipe makes for GAE projects, and .../parts/hydrant-app is the path to the app.yaml.

    HellooKitty:hydrant twillis$ cat ~/bin/pdb
    #! /usr/bin/env python
    if __name__ == "__main__":
        import sys
        sys.version_info
        import pdb
        pdb.main()
    HellooKitty:hydrant twillis$

    I am first presented with a prompt:

    Current directory is /Users/twillis/bin/
    C-c C-f

    Nothing happens, but:

    HellooKitty:hydrant twillis$ ps aux | grep pdb
    twillis 469 100.0 1.6 168488 67188 s002 Rs+ 1:03PM 0:52.19 /usr/local/bin/python2.5 /Users/twillis/projects/hydrant/bin/python /Users/twillis/bin/pdb /Users/twillis/projects/hydrant/bin/devappserver /Users/twillis/projects/hydrant/parts/hydrant-app/
    twillis 477 0.0 0.0 2435120 420 s000 R+ 1:05PM 0:00.00 grep pdb
    HellooKitty:hydrant twillis$

    ...so something is happening. C-x [space] will report that a breakpoint has been set. But I can't manage to get things going. It feels like I am missing something obvious here. Am I? So, is interactive debugging in Emacs worthwhile? Is interactively debugging a Google App Engine app possible? Any suggestions on how I might get this working?
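
    (One low-tech fallback, for what it's worth - nothing App Engine-specific, just the standard library: set the breakpoint in the source itself, then run the dev server from a terminal or M-x shell so the (Pdb) prompt stays interactive.)

        # Hypothetical placement inside whatever handler you want to inspect;
        # when execution reaches this line, pdb stops the program and gives
        # you an interactive (Pdb) prompt on the controlling terminal.
        import pdb; pdb.set_trace()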

    Read the article

  • What is a good way to store tilemap data?

    - by Stephen Tierney
    I'm developing a 2D platformer with some uni friends. We've based it upon the XNA Platformer Starter Kit, which uses .txt files to store the tile map. While this is simple, it does not give us enough control and flexibility with level design. Some examples: multiple files are required for multiple layers of content, each object is fixed onto the grid, rotation of objects isn't allowed, the number of distinct characters is limited, etc. So I'm doing some research into how to store the level data and map file. This concerns only the file-system storage of the tile maps, not the data structure to be used by the game while it is running. The tile map is loaded into a 2D array, so this question is about which source to fill the array from.

    Reasoning for a DB: From my perspective I see less redundancy of data using a database to store the tile data. Tiles in the same x,y position with the same characteristics can be reused from level to level. It seems like it would be simple enough to write a method to retrieve all the tiles that are used in a particular level from the database.

    Reasoning for JSON/XML: Visually editable files, and changes can be tracked via SVN a lot more easily. But there is repeated content.

    Does either have any drawbacks (load times, access times, memory, etc.) compared to the other? And what is commonly used in the industry? Currently the file looks like this:

    ....................
    ....................
    ....................
    ....................
    ....................
    ....................
    ....................
    .........GGG........
    .........###........
    ....................
    ....GGG.......GGG...
    ....###.......###...
    ....................
    .1................X.
    ####################

    1 - Player start point, X - Level Exit, . - Empty space, # - Platform, G - Gem
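
    (For illustration, a minimal JSON layout - every field name here is hypothetical - that addresses the limitations listed above: multiple layers in one file, off-grid positioning, and per-object rotation:)

        {
          "width": 20,
          "height": 15,
          "layers": [
            {
              "name": "platforms",
              "tiles": [
                { "id": "G", "x": 9, "y": 7 },
                { "id": "#", "x": 9, "y": 8 }
              ]
            },
            {
              "name": "entities",
              "objects": [
                { "type": "PlayerStart", "x": 1.0, "y": 13.0 },
                { "type": "Exit", "x": 18.0, "y": 13.5, "rotation": 90 }
              ]
            }
          ]
        }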

    Read the article

  • Fast Society Creates Mini and Mobile Temporary Social Networks

    - by ETC
    You’re out on the town or at a convention with a bunch of friends. How do you keep in touch with the entire group simultaneously? Fast Society offers a smartphone-based solution: a temporary social network for group talking, texting, and more. Fast Society was originally an iPhone-only application and has recently been updated to include an Android app too. The premise is simple: you set up a Fast Society group, link your friends into it, and for that night (or convention weekend) you’re all part of the same mini group. You can text the entire group, share pictures, set up sub-groups (let’s say half your group is going to stay up late and party while half need to hit the rack to get up early for presentations; you can create a new group for the night owls to communicate), share your location, and send in-app and SMS messages to the entire group. Check out the video above to see it in action or hit up the link below to read more and grab a copy. Fast Society [via Mashable]

    Read the article

  • Cannot install mysql-server on Ubuntu 11.10 via either Synaptic or the terminal

    - by roopunk
    I've been facing this problem when installing some other packages too. In the terminal I type:

        sudo apt-get install mysql-server

    and I get the following:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed:
          libhtml-template-perl mysql-server-5.1 mysql-server-core-5.1
        Suggested packages:
          libipc-sharedcache-perl tinyca mailx
        The following NEW packages will be installed:
          libhtml-template-perl mysql-server mysql-server-5.1 mysql-server-core-5.1
        0 upgraded, 4 newly installed, 0 to remove and 0 not upgraded.
        Need to get 6,260 kB/11.0 MB of archives.
        After this operation, 25.8 MB of additional disk space will be used.
        Do you want to continue [Y/n]? Y
        Err http://ubuntu.oss.eznetsols.org/ubuntu/ oneiric-updates/main mysql-server-5.1 i386 5.1.63-0ubuntu0.11.10.1
          Connection failed
        Err http://ubuntu.oss.eznetsols.org/ubuntu/ oneiric-security/main mysql-server-5.1 i386 5.1.63-0ubuntu0.11.10.1
          Connection failed
        Failed to fetch http://ubuntu.oss.eznetsols.org/ubuntu/pool/main/m/mysql-5.1/mysql-server-5.1_5.1.63-0ubuntu0.11.10.1_i386.deb  Connection failed
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

    Trying to install with the Synaptic package manager fails the same way. I am basically trying to set up LAMP. Any help or suggestion would be great. Thanks.
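    (The errors above are network failures against the eznetsols mirror, not a problem with the package itself. A sketch of the usual recovery path; the sed line assumes your sources.list really does point at the mirror shown in the error, so check it before running:)

        # refresh the package index, then retry the install
        sudo apt-get update
        sudo apt-get install mysql-server

        # if the mirror is still unreachable, switch to the main archive
        # (equivalent to picking another server under Software Sources)
        sudo sed -i 's|ubuntu.oss.eznetsols.org|archive.ubuntu.com|g' /etc/apt/sources.list
        sudo apt-get update
        sudo apt-get install mysql-server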

    Read the article

  • How to Install a Wireless Card in Linux Using Windows Drivers

    - by Justin Garrison
    Linux has come a long way with hardware support, but if you have a wireless card that still does not have native Linux drivers you might be able to get the card working with a Windows driver and ndiswrapper. Using a Windows driver inside of Linux may also give you faster transfer rates or better encryption support, depending on your wireless card. If your wireless card is already working, installing the Windows driver just for fun is not recommended, because it could cause a conflict with the native Linux driver.
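    (As a rough sketch of the ndiswrapper route; the package name below varies between releases, and the .inf path is wherever your Windows driver package extracts to:)

        sudo apt-get install ndiswrapper-utils-1.9        # or ndisgtk for a GUI front-end
        sudo ndiswrapper -i /path/to/windows/driver.inf   # install the Windows driver
        ndiswrapper -l                                    # should list the driver as installed
        sudo modprobe ndiswrapper                         # load the kernel module
        sudo ndiswrapper -m                               # write a modprobe alias so it persists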

    Read the article

  • How to deal with a 'public' work environment?

    - by Craige
    In the last six months I have changed desks at my office four times. I don't mind, as it's due to the company expanding, acquiring new office space, and getting everybody settled. However, I truly miss the semi-private office I sat in two desks ago. I am now sitting in a large room with a number of other people. My problem with this isn't my co-workers; everybody here is great. My problem is that, given the configuration of the room, no matter which desk I sit in my monitors WILL be facing an open window. This causes a glare on my monitors, and it drives me crazy. I prefer a dark IDE theme as I find it easier on the eyes, but this just makes the glare that much worse. How should programmers cope with public office settings? Secondly, how should I cope with my specific problem: should I give in and adopt a light IDE theme that reduces the visibility of the glare but increases eye strain, or should I stick to my guns and find another solution?

    Read the article

  • How to resolve dpkg error: old pre-removal script returned error exit status 102

    - by Siva Prasad Varma
    I am unable to install or remove a package on my Ubuntu 10.04 due to the following error:

        $ sudo apt-get autoremove
        Password:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages will be REMOVED:
          busybox
        0 upgraded, 0 newly installed, 1 to remove and 9 not upgraded.
        1 not fully installed or removed.
        Need to get 0B/212kB of archives.
        After this operation, 627kB disk space will be freed.
        Do you want to continue [Y/n]? y
        Selecting previously deselected package nscd.
        (Reading database ... 235651 files and directories currently installed.)
        Preparing to replace nscd 2.11.1-0ubuntu7.8 (using .../nscd_2.11.1-0ubuntu7.8_amd64.deb) ...
        invoke-rc.d: not a symlink: /etc/rc2.d/S76nscd
        dpkg: warning: old pre-removal script returned error exit status 102
        dpkg - trying script from the new package instead ...
        invoke-rc.d: not a symlink: /etc/rc2.d/S76nscd
        dpkg: error processing /var/cache/apt/archives/nscd_2.11.1-0ubuntu7.8_amd64.deb (--unpack):
          subprocess new pre-removal script returned error exit status 102
        update-rc.d: warning: /etc/rc2.d/S76nscd is not a symbolic link
        invoke-rc.d: not a symlink: /etc/rc2.d/S76nscd
        dpkg: error while cleaning up:
          subprocess installed post-installation script returned error exit status 102
        Errors were encountered while processing:
          /var/cache/apt/archives/nscd_2.11.1-0ubuntu7.8_amd64.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    What should I do to resolve this error? I have tried sudo dpkg --remove --force-remove-reinstreq nscd, but it did not work.
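    (Every failure in that log traces back to the same complaint: invoke-rc.d expects /etc/rc2.d/S76nscd to be a symlink into /etc/init.d, but on this system it is a regular file. A sketch of the usual repair, assuming nscd's init script really lives at /etc/init.d/nscd; back the stray file up first:)

        sudo mv /etc/rc2.d/S76nscd /root/S76nscd.bak   # keep a copy, just in case
        sudo ln -s ../init.d/nscd /etc/rc2.d/S76nscd   # restore the expected symlink
        sudo dpkg --configure -a                       # let dpkg finish the broken package
        sudo apt-get -f install                        # then retry the original operation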

    Read the article

  • Turn a Green Laser into a Microscope Projector [Science]

    - by ETC
    No need to settle for squinting over a microscope: hack a green laser and a cheap webcam lens into a projector. Check out the video below to see the projector in action. Over at Hackteria, user Dusjagr shares a clever tutorial on how to hack a laser and some odd parts into a projector intended for microscopic subjects. With a strong enough light source, a properly spaced lens assembly, and a slide or petri dish filled with something of interest, you can create a microscope projector. Curious how it would work and what the results would look like? Check out the demonstration video: the wiggling creature you see in it is a nematode from a sample of pond water. For more information on building your own laser-scope projector, hit up the link below. DIY Laser Microscope [Hackteria via Make]

    Read the article

  • My old Conchango blog posts are currently not accessible

    - by jamiet
    Some of you reading this may be aware that I used to blog at http://blogs.conchango.com/jamiethomson. That URL later changed to http://consultingblogs.emc.com/jamiethomson after Conchango (my employer) got taken over by EMC. In my last post on that site I stated that I had 676 blog posts there. Unfortunately, as of today, those 676 posts are inaccessible; if you try to get to http://consultingblogs.emc.com/jamiethomson you will see an error page. I am not the only one affected either; it seems that EMC have taken the same action for the blog sites of many of my old colleagues (e.g. http://consultingblogs.emc.com/merrickchaffer is also inaccessible). Early indications are that EMC have removed all blog posts by former employees, although that is yet to be confirmed. A few of us former employees are endeavouring to get this situation rectified, so watch this space. I am aware that many people in the SSIS community still refer to those old blog posts, so please be aware that any attempt to access them will be futile for the foreseeable future. @Jamiet

    Read the article

  • CSS and HTML inconsistencies when declaring multiple classes

    - by Cesco
    I'm learning CSS "seriously" for the first time, but I find the way you deal with multiple classes in CSS and HTML quite inconsistent. For example, I learned that if I want to declare multiple CSS classes with a common style applied to them, I have to write:

        .style1, .style2, .style3 { color: red; }

    Then, if I have to declare an HTML tag that has multiple classes applied to it, I have to write:

        <div class="style1 style2 style3"></div>

    And I'm asking: why? From my personal point of view it would be more coherent if both could be declared using a comma to separate each class, or if both could be declared using a space; after all, IMHO, we're still talking about multiple classes in both CSS and HTML. I think it would make more sense if I could write this to declare a div with multiple classes applied:

        <div class="style1, style2, style3"></div>

    Am I missing something important? Could you explain to me whether there's a valid reason behind these two different syntaxes? (One observation: the two lists mean different things. The CSS comma separates independent selectors, an "or" between rules, while the HTML class attribute is defined as a space-separated list of tokens that all apply at once; CSS even reserves the unspaced form .style1.style2 for matching elements that carry both classes.)

    Read the article

  • Problem after mysql-server installation: I can't install anything in Ubuntu 12.04.1 now

    - by mohammed ezzi
    I'm not an advanced user of Linux. I wanted to work with a database, so I installed mysql-server. I think I did something wrong, because now I'm in trouble: I can't install anything, and this is what I get when I use apt-get -f install:

        root@me:~# apt-get -f install
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          mysql-server mysql-server-5.5
        Suggested packages:
          tinyca mailx
        The following packages will be upgraded:
          mysql-server mysql-server-5.5
        2 upgraded, 0 newly installed, 0 to remove and 194 not upgraded.
        2 not fully installed or removed.
        Need to get 0 B/8,737 kB of archives.
        After this operation, 15.4 kB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        dpkg: dependency problems prevent configuration of mysql-server-5.5:
          mysql-server-5.5 depends on mysql-server-core-5.5 (= 5.5.24-0ubuntu0.12.04.1); however:
          Version of mysql-server-core-5.5 on system is 5.5.28-0ubuntu0.12.04.3.
        dpkg: error processing mysql-server-5.5 (--configure):
          dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        dpkg: dependency problems prevent configuration of mysql-server:
          mysql-server depends on mysql-server-5.5; however:
          Package mysql-server-5.5 is not configured yet.
        dpkg: error processing mysql-server (--configure):
          dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
          mysql-server-5.5
          mysql-server
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I tried to remove mysql-server, but nothing happened.
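    (The key line above is the version mismatch: dpkg is trying to configure mysql-server-5.5 at 5.5.24 while the installed mysql-server-core-5.5 is already at 5.5.28, which usually means a stale package index. A sketch of the usual way out; refresh the lists so both halves come from the same pocket, then let apt reconcile the pair:)

        sudo apt-get update
        sudo apt-get install mysql-server-5.5 mysql-server-core-5.5
        sudo dpkg --configure -a    # finish any half-configured packages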

    Read the article

  • 50 Years of LEDs: An Interview with Inventor Nick Holonyak [Video]

    - by Jason Fitzpatrick
    The man who powered on the first LED half a century ago is still around to talk about it; read on to watch an interview with LED inventor Nick Holonyak. The most fascinating thing about Holonyak’s journey to the invention of the LED is that he started off trying to build a laser and ended up inventing a super-efficient light source: Holonyak got his PhD in 1954. In 1957, after a year at Bell Labs and a two-year stint in the Army, he joined GE’s research lab in Syracuse, New York. GE was already exploring semiconductor applications and building the forerunners of modern diodes called thyristors and rectifiers. At a GE lab in Schenectady, the scientist Robert Hall was trying to build the first diode laser. Hall, Holonyak and others noticed that semiconductors emit radiation, including visible light, when electricity flows through them. Holonyak and Hall were trying to “turn them on,” and channel, focus and multiply the light. Hall was the first to succeed. He built the world’s first semiconductor laser. Without it, there would be no CD and DVD players today. “Nobody knew how to turn the semiconductor into the laser,” Holonyak says. “We arrived at the answer before anyone else.” But Hall’s laser emitted only invisible, infrared light. Holonyak spent more time in his lab, testing, cutting and polishing his hand-made semiconducting alloys. In the fall of 1962, he got first light. “People thought that alloys were rough and turgid and lumpy,” he says. “We knew damn well what happened and that we had a very powerful way of converting electrical current directly into light. We had the ultimate lamp.”

    Read the article
