Search Results

Search found 5205 results on 209 pages for 'extra'.


  • No GRUB Screen or recovery mode on Boot after 12.04 Upgrade

    - by Nick
    I tried the live boot CD and boot-repair, also loaded the Desktop install CD, and it looks like all partitions check out OK. However, when I try to boot Linux (the only bootable partition on the computer) I get a blank screen. Every so often the screen give me something akin to: Assuming write through cache Asking for cache data failed it appears to start booting, then hangs. Ctrl+Alt+Delete shuts down the machine The last message during boot is "STarting TiMidity++ ALSA midi emulation... [OK]" I used boot-repair to generate a boot info report. One thing looks odd to me- it reports a missing core.img on /dev/sda1. Here is the full info: Boot Info Script 0.61.full + Boot-Repair extra info [Boot-Info August 2nd 2012] ============================= Boot Info Summary: =============================== = Grub2 (v1.99) is installed in the MBR of /dev/sda and looks at sector 1 of the same hard drive for core.img. core.img is at this location and looks for (,msdos1)/boot/grub on this drive. = Windows is installed in the MBR of /dev/sdb. sda1: __________________________________________ File system: ext4 Boot sector type: Grub2 (v1.99) Boot sector info: Grub2 (v1.99) is installed in the boot sector of sda1 and looks at sector 18406911 of the same hard drive for core.img, but core.img can not be found at this location. Operating System: Ubuntu 12.04.1 LTS Boot files: /boot/grub/grub.cfg /etc/fstab /boot/extlinux/extlinux.conf /boot/grub/core.img sda2: __________________________________________ File system: Extended Partition Boot sector type: - Boot sector info: sda5: __________________________________________ File system: swap Boot sector type: - Boot sector info: sdb1: __________________________________________ File system: ntfs Boot sector type: Windows XP: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: ============================ Drive/Partition Info: ============================= Drive: sda _______________________________________ Disk /dev/sda: 160.0 GB, 160041885696 bytes 255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sda1 * 63 307,339,514 307,339,452 83 Linux /dev/sda2 307,339,515 312,576,704 5,237,190 5 Extended /dev/sda5 307,339,578 312,576,704 5,237,127 82 Linux swap / Solaris Drive: sdb _______________________________________ Disk /dev/sdb: 320.1 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sdb1 2,048 625,142,447 625,140,400 7 NTFS / exFAT / HPFS "blkid" output: ____________________________________ Device UUID TYPE LABEL /dev/loop0 squashfs /dev/sda1 11b4d633-7863-40b2-a6ca-da5f82c3ad0b ext4 /dev/sda5 cb8d65f4-8cf9-4088-b804-e3dea2151033 swap /dev/sdb1 349E7C109E7BC8BE ntfs Personal1 ================================ Mount points: ================================= Device Mount_Point Type Options /dev/sdb1 /media/Personal1 fuseblk (rw,nosuid,nodev,allow_other,blksize=4096,default_permissions) /dev/sr0 /live/image iso9660 (ro,noatime) ...(a bunch of config file info- let me know if anyone wants to see it!) But usually I just get "Cannot Display This Video Mode", which I know means the video output is not usable by the monitor. 
I'm looking for a way to get into a recovery mode. I'd really like to avoid wiping the drive. Any thoughts?

    Read the article

  • Function Folding in #PowerQuery

    - by Darren Gosbell
    Originally posted on: http://geekswithblogs.net/darrengosbell/archive/2014/05/16/function-folding-in-powerquery.aspxLooking at a typical Power Query query you will noticed that it's made up of a number of small steps. As an example take a look at the query I did in my previous post about joining a fact table to a slowly changing dimension. It was roughly built up of the following steps: Get all records from the fact table Get all records from the dimension table do an outer join between these two tables on the business key (resulting in an increase in the row count as there are multiple records in the dimension table for each business key) Filter out the excess rows introduced in step 3 remove extra columns that are not required in the final result set. If Power Query was to execute a query like this literally, following the same steps in the same order it would not be overly efficient. Particularly if your two source tables were quite large. However Power Query has a feature called function folding where it can take a number of these small steps and push them down to the data source. The degree of function folding that can be performed depends on the data source, As you might expect, relational data sources like SQL Server, Oracle and Teradata support folding, but so do some of the other sources like OData, Exchange and Active Directory. To explore how this works I took the data from my previous post and loaded it into a SQL database. Then I converted my Power Query expression to source it's data from that database. Below is the resulting Power Query which I edited by hand so that the whole thing can be shown in a single expression: let     SqlSource = Sql.Database("localhost", "PowerQueryTest"),     BU = SqlSource{[Schema="dbo",Item="BU"]}[Data],     Fact = SqlSource{[Schema="dbo",Item="fact"]}[Data],     Source = Table.NestedJoin(Fact,{"BU_Code"},BU,{"BU_Code"},"NewColumn"),     LeftJoin = Table.ExpandTableColumn(Source, "NewColumn"                                   , {"BU_Key", "StartDate", "EndDate"}                                   , {"BU_Key", "StartDate", "EndDate"}),     BetweenFilter = Table.SelectRows(LeftJoin, each (([Date] >= [StartDate]) and ([Date] <= [EndDate])) ),     RemovedColumns = Table.RemoveColumns(BetweenFilter,{"StartDate", "EndDate"}) in     RemovedColumns If the above query was run step by step in a literal fashion you would expect it to run two queries against the SQL database doing "SELECT * …" from both tables. However a profiler trace shows just the following single SQL query: select [_].[BU_Code],     [_].[Date],     [_].[Amount],     [_].[BU_Key] from (     select [$Outer].[BU_Code],         [$Outer].[Date],         [$Outer].[Amount],         [$Inner].[BU_Key],         [$Inner].[StartDate],         [$Inner].[EndDate]     from [dbo].[fact] as [$Outer]     left outer join     (         select [_].[BU_Key] as [BU_Key],             [_].[BU_Code] as [BU_Code2],             [_].[BU_Name] as [BU_Name],             [_].[StartDate] as [StartDate],             [_].[EndDate] as [EndDate]         from [dbo].[BU] as [_]     ) as [$Inner] on ([$Outer].[BU_Code] = [$Inner].[BU_Code2] or [$Outer].[BU_Code] is null and [$Inner].[BU_Code2] is null) ) as [_] where [_].[Date] >= [_].[StartDate] and [_].[Date] <= [_].[EndDate] The resulting query is a little strange, you can probably tell that it was generated programmatically. But if you look closely you'll notice that every single part of the Power Query formula has been pushed down to SQL Server. 
Power Query itself ends up just constructing the query and passing the results back to Excel; it does not do any of the data transformation steps itself. So now you can feel a bit more comfortable showing Power Query to your less technical colleagues, knowing that the tool will do its best to fold all the small steps in Power Query down to the most efficient query it can run against the source systems.

    Read the article

  • Install 32-bit gstreamer plugins on 64-bit

    - by Rua
    I am trying to install the 32-bit gstreamer plugins on my 64-bit system (Ubuntu 12.10 based). I can install the packages gstreamer0.10-plugins-base:i386 and gstreamer0.10-plugins-good:i386. However, gstreamer0.10-plugins-bad:i386, gstreamer0.10-plugins-bad-multiverse:i386 and gstreamer0.10-plugins-ugly:i386 conflict with 64-bit packages already installed on my system: $ sudo apt-get install gstreamer0.10-plugins-bad:i386 Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: gstreamer0.10-plugins-bad:i386 : Depends: libass4:i386 (>= 0.9.7) but it is not going to be installed Depends: libdca0:i386 but it is not going to be installed Depends: libdvdnav4:i386 (>= 4.2.0+20120524) but it is not going to be installed Depends: libdvdread4:i386 but it is not going to be installed Depends: libslv2-9:i386 (>= 0.6.4-1~) but it is not going to be installed E: Unable to correct problems, you have held broken packages. ... $ sudo apt-get install gstreamer0.10-plugins-bad-multiverse:i386 Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be installed: libavcodec53:i386 libavutil51:i386 libfaac0:i386 libfaad2:i386 libgsm1:i386 libmjpegtools-1.9:i386 libmp3lame0:i386 libquicktime2:i386 libschroedinger-1.0-0:i386 libswscale2:i386 libva1:i386 libvpx1:i386 libx264-123:i386 libxvidcore4:i386 The following packages will be REMOVED: gstreamer0.10-plugins-bad-multiverse libfaac0 libmjpegtools-1.9 mint-meta-codecs The following NEW packages will be installed: gstreamer0.10-plugins-bad-multiverse:i386 libavcodec53:i386 libavutil51:i386 libfaac0:i386 libfaad2:i386 libgsm1:i386 libmjpegtools-1.9:i386 libmp3lame0:i386 libquicktime2:i386 libschroedinger-1.0-0:i386 libswscale2:i386 libva1:i386 libvpx1:i386 libx264-123:i386 libxvidcore4:i386 0 upgraded, 15 newly installed, 4 to remove and 5 not upgraded. Need to get 9,198 kB of archives. After this operation, 23.3 MB of additional disk space will be used. Do you want to continue [Y/n]? ... $ sudo apt-get install gstreamer0.10-plugins-ugly:i386 Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: gstreamer0.10-plugins-ugly:i386 : Depends: libdvdread4:i386 but it is not going to be installed E: Unable to correct problems, you have held broken packages. This means that I can't play mp3s (among other things) in 32-bit applications that use gstreamer. Is there a way around this?

    Read the article

  • NEW: Oracle Certification Exam Preparation Seminars

    - by Harold Green
    Hi Everyone, I am really excited about a new offering that we are announcing this week - Oracle Certification Exam Preparation Seminars. These are something that will make a big difference for many of you in your efforts to become certified and move your career forward. They are also something that have previously only been available (but very popular) to the limited number of customers who have attended our annual conferences in San Francisco (Oracle OpenWorld and JavaOne). These are the first in a series of offerings that we are releasing over the next few months. So for those of you either preparing or considering Oracle certification - keep watching here on the blog, Facebook, Twitter and the Oracle Certification website for additional announcements related to our most popular certification areas. Details of the new Exam Preparation Seminars are found below: NEW: ORACLE CERTIFICATION EXAM PREPARATION SEMINARS Becoming Oracle certified is a great way to build your career, gain additional credibility and improve your earning power. We know that the decision to become certified is not trivial. Our surveys indicate that people consider their time investment a critical factor in their decision to become certified. Your time is important. In order to help candidates maximize the efficiency of their study time we are releasing a new series of video-based seminars called Exam Preparation Seminars. These seminars are patterned after the extremely popular Exam Cram sessions that until now have only been available at our annual customer conferences (Oracle Open World and JavaOne). Beginning today they are now available to anyone, anywhere as a part of this Exam Prep Seminar series. Features: Fast-paced objective by objective review of the exam topics - led by top Oracle University instructors 24/7 access through Oracle University's training on demand platform. Ability to re-watch all or part of the the seminar. All the conveniences of video-based training: start, stop, fast-forward, skip, rewind, review. Tips that will help you better understand what you need to know to pass the exam. The Exam Preparation Seminars are meant to help anyone with a working knowledge of the technology get that extra boost to help them finalize their preparation, and will help anyone who wants a better understanding of the the depth and breadth of the exam topics and objectives. Benefits: Save time by understanding what you should study. Makes you efficient because you will understand the breadth and depth of each of the exam topics. Helps you create a better, more efficient study plan. Improves your confidence in your skills and ability to pass the certification exam. Exam Preparation Seminars are available individually, or in convenient Value Packages (which include the Exam Preparation Seminar, and an exam voucher which includes one free-retake if you need it). Currently we are releasing two seminars - one for DBA SQL and one for DBA Administration I. Additional offerings are in process. Find out more: General WEB: Oracle Certification Exam Preparation Seminars VIDEO: Exam Preparation Seminars Promo (1:27) Oracle Database Administration I (11g, 10g) VIDEO: Instructor Introduction (1:08) VIDEO: Sample Video (2:16) Oracle Database SQL VIDEO: Instructor Introduction (1:08) VIDEO: Sample Video (2:16)

    Read the article

  • TouchDevelop: The Fast Path to Windows 8 and Phone Apps

    - by Clint Edmonson
    Are you looking for a little extra cash for the upcoming holidays? Then you might be interested in creating some cool apps to sell in the Windows Store. Or maybe you’re simply curious and want to try your hand at developing for Windows 8 and Windows Phone. In either case, the newly released TouchDevelop Web App is for you. TouchDevelop Web App is a development environment to create apps on your tablet or smartphone, without requiring a separate PC. Scripts written by using TouchDevelop can access data, media, and sensors on the phone, tablet, and PC. The script can interact with cloud services, including storage, computing, and social networks. TouchDevelop lets you quickly create fun games and useful tools, turning your scripts into true Windows Phone and Windows 8 apps. A year ago, Microsoft Research released TouchDevelop for Windows Phone, which is being used by enthusiasts, students, and researchers to program their phones in fun, inventive, and interesting ways. These scripts are available at TouchDevelop for anyone to download and use. Ever since we released TouchDevelop, we’ve been eyeing the tablet form factor and working on a version for the browser. Now, with the release of TouchDevelop Web App, the wait is over: the tablet version is ready, so go play around with it. All TouchDevelop scripts that are developed on the smartphone can be downloaded to the tablet and run (if hardware allows). Any script that is developed on the tablet can also be accessed on the phone. And scripts can be converted to Windows Phone or Windows 8 apps and submitted to the Windows Phone Store or Windows Store, respectively. TouchDevelop Web App’s editor and programming language have been designed for tablet devices with touchscreens, but you can also use a keyboard and a mouse. So grab your web-enabled device and give the TouchDevelop Web App a try. It’s fun and easy, and could even put a little cash in your holiday-depleted wallet. Or at least give you bragging rights at family get-togethers. Are you interested in further tips on Windows 8 development?  Sign up for the 30 to launch program which will help you build a Windows Store application in 30 days.  You will receive a tip per day for 30 days, along with potential free design consultations and technical support from a Windows 8 expert. As always, stay tuned to my twitter feed for Windows 8, Windows Azure and other Microsoft announcements, updates, and links: @clinted

    Read the article

  • Why would you dual-run an app on Azure and AWS?

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/11/10/why-would-you-dual-run-an-app-on-azure-and-aws.aspxI had this question from a viewer of my Pluralsight course, Implementing the Reactive Manifesto with Azure and AWS, and thought I’d publish the response. So why would you dual-run your cloud app by hosting it on Azure and AWS? Sounds like a lot of extra development and management overhead. Well the most compelling reasons are reliability and portability. In 2012 I was working for a client who was making a big investment in the cloud, and at the end of the year we published their first external API for business partners. It was hosted in Azure and used some really nice features to route back into existing on-premise services. We were able to publish a clean, simple API to partners, and hide away the underlying complexity of the internal services while still leveraging them to do all the work. Two days after we went live, we were hit by the Azure SSL certificate expiry outage, and our API was unavailable for the best part of 3 days. Fortunately we had planned a gradual roll-out to partners, so the impact was minimal, but we’d been intending to ramp up quickly, and if the outage had happened a week or two later we would have been in a very bad place. Not least because our app could only run on Azure, we couldn’t package it up for another service without going back and reworking the code. More recently AWS had an issue with a networking device in one of their data centres which caused an outage that took the best part of a day to resolve. In both scenarios the SLAs are worthless, as you’ll get back a small percentage of your cloud expenditure, which is going to be negligible compared to your costs in dealing with the outage. And if your app is built specifically for AWS or Azure then if there’s an extended outage you can’t just deploy it onto a new set of kit from a different supplier. And the chances are pretty good there will be another extended outage, both for Microsoft and for Amazon. But the chances are small that it will happen to both at the same time. So my basic guidance has been: ignore the SLAs, go for better uptime by using two clouds. As soon as you need to scale beyond a single instance, start by scaling out to another cloud. Then scale out to different data centres in both clouds. Then you’ve got dual-cloud, quadruple-datacentre redundancy, so any more scaling you need can be left to the clouds to auto-scale themselves. By running in both clouds, you’ve made your app portable, so in the highly unlikely event that both AWS and Azure go down in multiple regions, you’ll have a deployment package which will let you spin up a new stack on yet another cloud, without having to rework your solution.
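    A minimal sketch of the portability idea described above — the interface and class names here are illustrative, not taken from the course or the author's code: keep anything provider-specific behind a small abstraction, so which cloud the app runs on becomes a deployment decision rather than a rewrite.

        using System.Collections.Concurrent;
        using System.Threading.Tasks;

        // Application code depends only on this interface, never on a specific cloud SDK.
        public interface IBlobStore
        {
            Task UploadAsync(string container, string name, byte[] content);
            Task<byte[]> DownloadAsync(string container, string name);
        }

        // Stand-in implementation so the sketch compiles; real Azure and AWS implementations
        // would wrap the respective vendor SDKs behind this same interface.
        public sealed class InMemoryBlobStore : IBlobStore
        {
            private readonly ConcurrentDictionary<string, byte[]> blobs =
                new ConcurrentDictionary<string, byte[]>();

            public Task UploadAsync(string container, string name, byte[] content)
            {
                blobs[container + "/" + name] = content;
                return Task.FromResult(0);
            }

            public Task<byte[]> DownloadAsync(string container, string name)
            {
                return Task.FromResult(blobs[container + "/" + name]);
            }
        }

        // The rest of the application never mentions a particular cloud.
        public sealed class ReportService
        {
            private readonly IBlobStore store;
            public ReportService(IBlobStore store) { this.store = store; }

            public Task SaveAsync(string name, byte[] report)
            {
                return store.UploadAsync("reports", name, report);
            }
        }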

    Read the article

  • Actor and Sprite, who should own these properties?

    - by Gerardo Marset
    I'm writing sort of a 2D game engine for making the process of creating games easier. It has two classes, Actor and Sprite. Actor is used for interactive elements (the player, enemies, bullets, a menu, an invisible instance that controls score, etc) and Sprite is used for animated (or not) images with transparency (or not). The actor may have an assigned sprite that represents it on the screen, which may change during the game. E.g. in a top-down action game you may have an actor with a sprite of a little guy that changes when attacking, walking, and facing different directions, etc. Currently the actor has x and y properties (its coordinates in the screen), while the sprite has an index property (the number of the frame currently being shown by the sprite). Since the sprite doesn't know which actor it belongs to (or if it belongs to an actor at all), the actor must pass its x and y coordinates when drawing the sprite. Also, since a actors may reset its sprite each frame (and usually do), the sprite's index property must be passed from the old to the new sprite like so (pseudocode): function change_sprite(new_sprite) old_index = my.sprite.index my.sprite = new_sprite() my.sprite.index = old_index % my.sprite.frames end I always thought this was kind of cumbersome, but it never was a big problem. Now I decided to add support for more properties. Namely a property to draw the sprite rotated, a property to draw it flipped, it a property draw it stretched, etc. These should probably belong to the sprite and not the actor, but if they do, the actor would have to pass them from the old to the new sprite each time it changes... On the other hand, if they belonged to the actor, the actor would have to pass each property to the sprite when drawing it (since the sprite doesn't know which actor it belongs to, and it shouldn't, since sprites aren't just meant to be used by actors, really). Another option I thought of would be having an extra class that owns all these properties (plus index, x and y) and links an actor with a sprite, but that doesn't come without drawbacks. So, what should I do with all these properties? Thanks!
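    One possible shape for the "extra class" option mentioned at the end — purely an illustrative C# sketch with hypothetical names: the actor owns a render-state object that survives sprite changes, and whichever sprite is current only reads it when drawing, so nothing has to be copied when the sprite is swapped.

        // Hypothetical sketch: state that must outlive any particular sprite lives in one place.
        public sealed class RenderState
        {
            public float X;
            public float Y;
            public float Rotation;
            public bool FlippedX;
            public float Scale = 1f;
            public int FrameIndex;
        }

        public sealed class Sprite
        {
            public int FrameCount { get; private set; }
            public Sprite(int frameCount) { FrameCount = frameCount; }

            public void Draw(RenderState state)
            {
                int frame = state.FrameIndex % FrameCount; // animation progress survives sprite swaps
                // draw 'frame' at (state.X, state.Y) using state.Rotation, state.FlippedX, state.Scale
            }
        }

        public sealed class Actor
        {
            public RenderState State { get; private set; }
            public Sprite Sprite { get; set; } // can be reassigned every frame with no copying

            public Actor() { State = new RenderState(); }

            public void Draw()
            {
                if (Sprite != null) Sprite.Draw(State);
            }
        }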

    Read the article

  • In a multidisciplinary team, how much should each member's skills overlap?

    - by spade78
    I've been working in embedded software development for this small startup and our team is pretty small: about 3-4 people. We're responsible for all engineering, which involves an RF device controlled by an embedded microcontroller that connects to a PC host which runs some sort of data collection and analysis software. I have come to develop these two guidelines when I work with my colleagues: Define a clear separation of responsibilities and make sure each person's contribution to the final product doesn't overlap. Don't assume your colleagues know everything about their responsibilities. I assume there is some sort of technology that I will need to be competent at to properly interface with the work of my colleagues. The first point is pretty easy for us. I do firmware, one guy does the RF, another does the PC software, and the last does the DSP work. Nothing overlaps in terms of two people's work being mixed into the final product. For that to happen, one guy has to hand off work to another guy who will vet it and integrate it himself. The second point is the heart of my question. I've learned the hard way not to trust the knowledge of my colleagues absolutely, no matter how many years of experience they claim to have. At least not until they've demonstrated it to me a couple of times. So, given that, whenever I develop a piece of firmware, if it interfaces with some technology that I don't know, then I'll try to learn it and develop a piece of test code that helps me understand what they're doing. That way, if my piece of the product comes into conflict with another piece, I have some knowledge about possible causes. For example, the PC guy has started implementing his GUIs in .NET WPF (C#) and using LibUSBdotNET for USB access. So I've been learning C# and the .NET USB library that he uses, and I built a little console app to help me understand how that USB library works. Now all this takes extra time and energy, but I feel it's justified as it gives me a foothold to confront integration problems. Also, I like learning this new stuff, so I don't mind. On the other hand, I can see how this can turn into a time sink for work that won't make it into the final product and may never turn into a problem. So how much experience/skills overlap do you expect in your teammates relative to your own skills? Does this issue go away as the teams get bigger and more diverse?

    Read the article

  • Proper Usage of Arrays and Functions [closed]

    - by Ssegawa Victor
    Can someone help me write a C program that solves the following problem? PROBLEM Consider the faculty registrar who has to process results for 1st year 1st semester students. Students offer five courses: CSC 1100, CSK 1101, CSC 1104, CSC 1105 and CSC 1106. The courses have credit units 4, 4, 4, 3 and 3 respectively. Lecturers provide course work and exam marks. For each course, course work constitutes 40% of the final mark while the exam constitutes 60% of the final mark. The role of the registrar is to: Compute the final mark for each student for each course; the final mark must be a whole number. Compute the grade and grade point of the students for each course they offered. According to senate regulations, grades and grade points are awarded to final marks according to the following criteria: Range Grade Grade Point 90 – 100 A+ 5.0 80 – 89 A 5.0 75 – 79 B+ 4.5 70 – 74 B 4.0 65 – 69 C+ 3.5 60 – 64 C 3.0 55 – 59 D+ 2.5 50 – 54 D 2.0 45 – 49 E 1.5 40 – 44 E- 1.0 0 – 39 F 0.0 Put a comment ‘Retake’ for a student on every course where the grade point is less than 2.0. Compute the cumulative grade point average (CGPA) for each student; the senate formula is CGPA = (Σ from i=1 to N of CU_i × GP_i) / (Σ from i=1 to N of CU_i), where CU_i and GP_i are the credit units and grade point for course i. Put a comment “Progress” for any student whose CGPA is greater than 2 and “Stay Put” for a student whose CGPA is less than 2. You are required to create a C program that considers a class of 25 students and: 1. Initializes an array ‘student’ which stores student names 2. Initializes arrays for course work and exam for each course; ‘cw_csc_1100’ and ‘ex_csc_1100’ store course work and exam marks (respectively) for CSC 1100, and the same approach is used for all other courses 3. Initializes the coursework and exam marks arrays with marks between 0 and 99 4. Writes appropriate functions that will generate the final marks, grades, grade points, cumulative grade points, comments for students and comments for courses per student 5. Creates appropriate arrays for final marks and inserts the data there using the appropriate functions 6. Without having to create any extra arrays, uses the functions created to generate a report per student that looks like the one below: Student Name: Ngubiri Course Unit Final mark Grade Grade Point Course Comment CSC 1100 43 E- 1.0 Retake CSK 1101 50 D 2.0 CSC 1104 59 D+ 2.5 CSC 1105 70 B 4.0 CSC 1106 65 C+ 3.5 CGPA 2.47 Overall Comment Progress NB: It is advisable that the indices are used to identify the owners, e.g. if student[x] is John, then cw_csc_1100[x] should be a mark for John since the index is the same
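    The assignment itself asks for C, but the grading arithmetic is the part that is easy to get wrong, so here is a small illustrative sketch of just that piece (shown in C# for brevity; the logic translates directly to C functions): the 40/60 weighting, the mark-to-grade-point mapping, and the CGPA formula above.

        using System;

        public static class Grading
        {
            // Assumes "whole number" means rounding; truncation would also satisfy the wording.
            public static int FinalMark(int courseWork, int exam)
            {
                return (int)Math.Round(courseWork * 0.4 + exam * 0.6);
            }

            public static double GradePoint(int finalMark)
            {
                if (finalMark >= 80) return 5.0;   // A+ (90-100) and A (80-89) both carry 5.0
                if (finalMark >= 75) return 4.5;   // B+
                if (finalMark >= 70) return 4.0;   // B
                if (finalMark >= 65) return 3.5;   // C+
                if (finalMark >= 60) return 3.0;   // C
                if (finalMark >= 55) return 2.5;   // D+
                if (finalMark >= 50) return 2.0;   // D
                if (finalMark >= 45) return 1.5;   // E
                if (finalMark >= 40) return 1.0;   // E-
                return 0.0;                        // F
            }

            // CGPA = sum(CU_i * GP_i) / sum(CU_i)
            public static double Cgpa(int[] creditUnits, double[] gradePoints)
            {
                double weighted = 0, totalUnits = 0;
                for (int i = 0; i < creditUnits.Length; i++)
                {
                    weighted += creditUnits[i] * gradePoints[i];
                    totalUnits += creditUnits[i];
                }
                return weighted / totalUnits;
            }
        }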

    Read the article

  • Monitor the Weather from Your Windows 7 Taskbar

    - by Asian Angel
    Keeping up with the weather forecast can be hard when you are extra busy with work. If you need a simple but nice looking way to integrate weather monitoring into your Taskbar then join us as we look at WeatherBar. Setting Up & Using WeatherBar To get started unzip the following files, place them in an appropriate “Program Files Folder”, and create a shortcut. When you start WeatherBar for the first time you will be presented with the following window and a random/default location. To get WeatherBar set up for your location there are only two settings to adjust (using the “Pencil & Gear Buttons”). Clicking on the “Pencil Button” will open up this small window…enter the name of your location and click “OK”. Next click on the “Gear Button” where you can choose the “Update Interval” and “Measurement Format” that best suits your needs. Click “OK” when finished and WeatherBar will be ready to go. That definitely looks nice. When you are finished viewing this window minimize it to the “Taskbar Icon” instead of clicking on the “Close Button”…otherwise the entire app will close. Left click on the “Taskbar Icon” to bring the window back up… Hovering the mouse over the “Taskbar Icon” provides a nice thumbnail of the weather forecast. Right clicking on the “Taskbar Icon” will display a nice mini forecast. Conclusion While WeatherBar may not be for everyone it does provide a nice easy way to monitor the weather from your “Taskbar” without taking up a lot of room. Links Download WeatherBar

    Read the article

  • At $20/month, Windows Azure hosts my website with 99.97% uptime

    - by Gopinath
    A couple of years ago, reliable and decent-performing Windows hosting was not affordable for many enthusiastic developers who wanted to try a startup idea or build a hobby site. I tried to start an ASP.NET website a few years ago to provide services like Mobile Tracing and Vehicle Tracing, but due to the high cost of Windows hosting I developed those services using PHP (not an easy task for a .NET developer) and hosted them on Linux servers. With the recent evolution of Windows Azure, however, hosting ASP.NET websites on highly reliable servers is affordable. Today anyone can host a highly responsive and available ASP.NET website for just $20/month using Windows Azure. My website coziie.com is running on Windows Azure and serves close to a quarter million visitors a month with 99.97% uptime, and most page load times are less than 3 seconds. All I spend to run this website is around $20; if you translate it to Indian rupees it's roughly Rs.1000. The web server of coziie.com is powered by a single Extra Small Web role instance and the backend is powered by a SQL Azure instance. Azure is quite impressive to provide 99.97% uptime. Response times during peaks are around 3 seconds and on normal loads around 1.5 seconds. Here is the report of uptime provided by Royal Pingdom over the last year. For just $20/month Windows Azure takes care of the following apart from hosting: patches the Windows OS to the latest version; upgrades ASP.NET to the latest version – coziie.com is running on ASP.NET MVC 3 and soon I'll upgrade it to ASP.NET MVC 4; hosts data on the latest and best version of SQL Server – SQL Azure maintains 3 copies of the database and automatically recovers in case of server failures and disasters, so I never worry about database backup/restore; provides a staging environment for deploying applications for testing purposes and moving them to production – I upgrade twice a month on average. With Windows Azure I no longer focus on server maintenance or data backups. They are taken care of by the Microsoft team and I just focus on building my website. I wish there were a low-cost Linux version of Windows Azure so that I could stop worrying about server maintenance for this blog! If you are looking for Windows hosting, look no further than Windows Azure. If you find $20/month a bit expensive to start with, you may explore Azure Websites (a sort of shared hosting environment), which is free to start with; as your traffic grows you can move to paid hosting.

    Read the article

  • Quick run through of the WP7 Developer Tools January 2011

    - by mbcrump
    In case you haven’t heard the latest WP7 Developers Tool update was released yesterday and contains a few goodies. First you need to go and grab the bits here. You can install them in any order, but I installed the WindowsPhoneDeveloperResources_en-US_Patch1.msp first. Then the VS10-KB2486994-x86.exe. They install silently. In other words, you would need to check Programs and Features and look in Installed Updates to see if they installed successfully. Like the screenshot below: Once you get them installed you can try out a few new features. Like Copy and Paste. Just fire up your application and put a TextBox on it and Select the Text and you will have the option highlighted in red above the text. Once you select it you will have the option to paste it. (see red rectangle below). Another feature is the Windows Phone Capability Detection Tool – This tool detects the phone capabilities used by your application. This will prevent you from submitting an app to the marketplace that says it uses x feature but really does not. How do you use it? Well navigate out to either directory: %ProgramFiles%\Microsoft SDKs\Windows Phone\v7.0\Tools\CapDetect %ProgramFiles (x86)%\Microsoft SDKs\Windows Phone\v7.0\Tools\CapDetect and run the following command: CapabilityDetection.exe Rules.xml YOURWP7XAPFILEOUTPUTDIRECTORY So, in my example you will see my app only requires the ID_CAP_MICROPHONE. Let’s see what the WmAppManifest.xml says in our WP7 Project: Whoa! That’s a lot of extra stuff we don’t need. We can delete unused capabilities safely now. Some of the other fixes are: (Copied straight from Microsoft) Fixes a text selection bug in pivot and panorama controls. In applications that have pivot or panorama controls that contain text boxes, users can unintentionally change panes when trying to copy text. To prevent this problem, open your application, recompile it, and then resubmit it to the Windows Phone Marketplace. Windows Phone Connect Tool – Allows you to connect your phone to a PC when Zune® software is not running and debug applications that use media APIs. For more information, see How to: Use the Connect Tool. Updated Bing Maps Silverlight Control – Includes improvements to gesture performance when using Bing™ Maps Silverlight® Control. Windows Phone Developer Tools Fix allowing deployment of XAP files over 64 MB in size to physical phone devices for testing and debugging. That’s pretty much it. Thanks again for reading my blog!  Subscribe to my feed CodeProject

    Read the article

  • How do you handle objects that need custom behavior, and need to exist as an entity in the database?

    - by Scott Whitlock
    For a simple example, assume your application sends out notifications to users when various events happen. So in the database I might have the following tables: TABLE Event EventId uniqueidentifier EventName varchar TABLE User UserId uniqueidentifier Name varchar TABLE EventSubscription EventUserId EventId UserId The events themselves are generated by the program. So there are hard-coded points in the application where an event instance is generated, and it needs to notify all the subscribed users. So, the application itself doesn't edit the Event table, except during initial installation, and during an update where a new Event might be created. At some point, when an event is generated, the application needs to lookup the Event and get a list of Users. What's the best way to link the event in the source code to the event in the database? Option 1: Store the EventName in the program as a fixed constant, and look it up by name. Option 2: Store the EventId in the program as a static Guid, and look it up by ID. Extra Credit In other similar circumstances I may want to include custom behavior with the event type. That is, I'll want subclasses of my Event entity class with different behaviors, and when I lookup an event, I want it to return an instance of my subclass. For instance: class Event { public Guid Id { get; } public Guid EventName { get; } public ReadOnlyCollection<EventSubscription> EventSubscriptions { get; } public void NotifySubscribers() { foreach(var eventSubscription in EventSubscriptions) { eventSubscription.Notify(); } this.OnSubscribersNotified(); } public virtual void OnSubscribersNotified() {} } class WakingEvent : Event { private readonly IWaker waker; public WakingEvent(IWaker waker) { if(waker == null) throw new ArgumentNullException("waker"); this.waker = waker; } public override void OnSubscribersNotified() { this.waker.Wake(); base.OnSubscribersNotified(); } } So, that means I need to map WakingEvent to whatever key I'm using to look it up in the database. Let's say that's the EventId. Where do I store this relationship? Does it go in the event repository class? Should the WakingEvent know declare its own ID in a static member or method? ...and then, is this all backwards? If all events have a subclass, then instead of retrieving events by ID, should I be asking my repository for the WakingEvent like this: public T GetEvent<T>() where T : Event { ... // what goes here? ... } I can't be the first one to tackle this. What's the best practice?
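    For the "extra credit" part, one workable pattern — offered only as an illustrative sketch, not the questioner's design; the GUID value and names are made up — is a small registry that maps each event's database ID to a factory for its subclass, so lookups by ID and by type both go through the same table:

        using System;
        using System.Collections.Generic;

        public class Event
        {
            public Guid Id { get; private set; }
            protected Event(Guid id) { Id = id; }
            public virtual void OnSubscribersNotified() { }
        }

        public sealed class WakingEvent : Event
        {
            // The database ID lives with the subclass, in exactly one place.
            public static readonly Guid EventId = new Guid("6f1c1d2e-0000-0000-0000-000000000001");
            public WakingEvent() : base(EventId) { }
            public override void OnSubscribersNotified() { /* wake whatever needs waking */ }
        }

        public sealed class EventRegistry
        {
            private readonly Dictionary<Guid, Func<Event>> byId = new Dictionary<Guid, Func<Event>>();
            private readonly Dictionary<Type, Guid> byType = new Dictionary<Type, Guid>();

            public void Register<T>(Guid id, Func<T> factory) where T : Event
            {
                byId[id] = () => factory();
                byType[typeof(T)] = id;
            }

            public Event Get(Guid id) { return byId[id](); }

            public T Get<T>() where T : Event { return (T)Get(byType[typeof(T)]); }
        }

        // Usage: registry.Register(WakingEvent.EventId, () => new WakingEvent());
        // A repository could then call Get(id) for IDs read from the database, or Get<WakingEvent>()
        // from code, and attach the subscriptions loaded from the EventSubscription table either way.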

    Read the article

  • Can a 10-bit monitor connection preserve all tones in 8-bit sRGB gradients on a wide-gamut monitor?

    - by hjb981
    This question is about color management and the use of a higher color depth, 10 bits per channel (30 bits in total, resulting in 1.07 billion colors, or 1024 shades of gray, sometimes referred to as "deep color") compared to the standard of 8 bits per channel (24 bits in total, 16.7 million colors, 256 shades of gray, sometimes referred to as "true color"). Do not confuse with "32 bit color", which usually refers to standard 8 bit color with an extra channel ("alpha channel") for transparency (used to achieve effects like semi-transparent windows etc). The following can be assumed to be in place: 1: A wide-gamut monitor that supports 10-bit input. Further, it can be assumed that the monitor has been calibrated to its native gamut and that an ICC color profile has been created. 2: A graphics card that supports 10-bit output (and is connected to the monitor via DisplayPort). 3: Drivers for the graphics card that support 10-bit output. If applications that support 10-bit output and color profiles would be used, I would expect them to display images that were saved using different color spaces correctly. For example, both an sRGB and an adobeRGB image should be displayed correctly. If an sRGB image was saved using 8 bits per channel (almost always the case), then the 10-bit signal path would ensure that no tonal gradients were lost in the conversion from the sRGB of the image to the native color space of the monitor. For example: If the image contains a pixel that is pure red in 8 bits (255,0,0), the corresponding value in 10 bits would be (1023,0,0). However, since the monitor has a larger color space than sRGB, sending the signal (1023,0,0) to the monitor would result in a red that was too saturated. Therefore, according to the ICC color profile, the signal would be transformed into a different value with less red saturation, for example (987,0,0). Since there are still plenty of levels left between 0 and 987, all 256 values (0-255) for red in the sRGB color space of the file could be uniquely mapped to color-corrected 10-bit values in the monitor's native color space. However, if the conversion was done in 8 bits, (255,0,0) would be translated to (246,0,0), and there would now only be 247 available levels for the red channel instead of 256, degrading the displayed image quality. My question is: how does this work on Ubuntu? Let's say that I use Firefox (which is color-aware and uses ICC color profiles). Would I get 10-bit processing, thus preserving all levels of an 8-bit picture? What is the situation like for other applications, especially photo applications like Shotwell, Rawtherapee, Darktable, RawStudio, Photivo etc? Does Ubuntu differ from other operating systems (Linux and others) on this point?
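    The arithmetic in the example above is easy to check with a few lines of code. This is only a back-of-the-envelope illustration (it says nothing about what any particular driver or application actually does): remapping the 256 sRGB input levels into 0–246 in an 8-bit pipeline collapses some of them, while the same remap into 0–987 in a 10-bit pipeline keeps all 256 distinct.

        using System;
        using System.Collections.Generic;

        class BitDepthCheck
        {
            static void Main()
            {
                // 246 and 987 are the gamut-corrected maximum reds from the example above.
                Console.WriteLine("8-bit pipeline keeps {0} of 256 levels", CountDistinct(246));
                Console.WriteLine("10-bit pipeline keeps {0} of 256 levels", CountDistinct(987));
            }

            // How many distinct output values do the 256 sRGB input levels map to?
            static int CountDistinct(int outputMax)
            {
                var seen = new HashSet<int>();
                for (int v = 0; v <= 255; v++)
                    seen.Add((int)Math.Round(v * (double)outputMax / 255));
                return seen.Count;
            }
        }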

    Read the article

  • Memory read/write access efficiency

    - by wolfPack88
    I've heard conflicting information from different sources, and I'm not really sure which one to believe. As such, I'll post what I understand and ask for corrections. Let's say I want to use a 2D matrix. There are three ways that I can do this (at least that I know of). 1: int i; char **matrix; matrix = malloc(50 * sizeof(char *)); for(i = 0; i < 50; i++) matrix[i] = malloc(50); 2: int i; int rowSize = 50; int pointerSize = 50 * sizeof(char *); int dataSize = 50 * 50; char **matrix; matrix = malloc(dataSize + pointerSize); char *pData = matrix + pointerSize - rowSize; for(i = 0; i < 50; i++) { pData += rowSize; matrix[i] = pData; } 3: //instead of accessing matrix[i][j] here, we would access matrix[i * 50 + j] char *matrix = malloc(50 * 50); In terms of memory usage, my understanding is that 3 is the most efficient, 2 is next, and 1 is least efficient, for the reasons below: 3: There is only one pointer and one allocation, and therefore, minimal overhead. 2: Once again, there is only one allocation, but there are now 51 pointers. This means there is 50 * sizeof(char *) more overhead. 1: There are 51 allocations and 51 pointers, causing the most overhead of all options. In terms of performance, once again my understanding is that 3 is the most efficient, 2 is next, and 1 is least efficient. Reasons being: 3: Only one memory access is needed. We will have to do a multiplication and an addition as opposed to two additions (as in the case of a pointer to a pointer), but memory access is slow enough that this doesn't matter. 2: We need two memory accesses; once to get a char *, and then to the appropriate char. Only two additions are performed here (once to get to the correct char * pointer from the original memory location, and once to get to the correct char variable from wherever the char * points to), so multiplication (which is slower than addition) is not required. However, on modern CPUs, multiplication is faster than memory access, so this point is moot. 1: Same issues as 2, but now the memory isn't contiguous. This causes cache misses and extra page table lookups, making it the least efficient of the lot. First and foremost: Is this correct? Second: Is there an option 4 that I am missing that would be even more efficient?
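    As a side note, the same layout trade-off shows up in other languages; the sketch below uses C# only to illustrate the point and is not part of the original C question — a jagged array corresponds to option 1 (an extra indirection per access, rows allocated separately), while a flat array with manual index arithmetic corresponds to option 3 (one contiguous block).

        public static class MatrixLayouts
        {
            public static void Demo()
            {
                // Option 1 analogue: a reference array plus 50 row arrays, reached through an extra indirection.
                char[][] jagged = new char[50][];
                for (int i = 0; i < 50; i++)
                    jagged[i] = new char[50];

                // Option 3 analogue: one contiguous block, indexed as row * columns + column.
                char[] flat = new char[50 * 50];

                char fromJagged = jagged[3][7];      // two dereferences
                char fromFlat = flat[3 * 50 + 7];    // one dereference plus arithmetic
                System.Console.WriteLine("{0}{1}", fromJagged, fromFlat);
            }
        }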

    Read the article

  • Headaches using distributed version control for traditional teams?

    - by J Cooper
    Though I use and like DVCS for my personal projects, and can totally see how it makes managing contributions to your project from others easier (e.g. your typical Github scenario), it seems like for a "traditional" team there could be some problems over the centralized approach employed by solutions like TFS, Perforce, etc. (By "traditional" I mean a team of developers in an office working on one project that no one person "owns", with potentially everyone touching the same code.) A couple of these problems I've foreseen on my own, but please chime in with other considerations. In a traditional system, when you try to check your change in to the server, if someone else has previously checked in a conflicting change then you are forced to merge before you can check yours in. In the DVCS model, each developer checks in their changes locally and at some point pushes to some other repo. That repo then has a branch of that file that 2 people changed. It seems that now someone must be put in charge of dealing with that situation. A designated person on the team might not have sufficient knowledge of the entire codebase to be able to handle merging all conflicts. So now an extra step has been added where someone has to approach one of those developers, tell him to pull and do the merge and then push again (or you have to build an infrastructure that automates that task). Furthermore, since DVCS tends to make working locally so convenient, it is probable that developers could accumulate a few changes in their local repos before pushing, making such conflicts more common and more complicated. Obviously if everyone on the team only works on different areas of the code, this isn't an issue. But I'm curious about the case where everyone is working on the same code. It seems like the centralized model forces conflicts to be dealt with quickly and frequently, minimizing the need to do large, painful merges or have anyone "police" the main repo. So for those of you who do use a DVCS with your team in your office, how do you handle such cases? Do you find your daily (or more likely, weekly) workflow affected negatively? Are there any other considerations I should be aware of before recommending a DVCS at my workplace?

    Read the article

  • Software Tuned to Humanity

    - by Phil Factor
    I learned a great deal from a cynical old programmer who once told me that the ideal length of time for a compiler to do its work was the same time it took to roll a cigarette. For development work, this is oh so true. After intently looking at the editing window for an hour or so, it was a relief to look up, stretch, focus the eyes on something else, and roll the possibly-metaphorical cigarette. This was software tuned to humanity. Likewise, a user’s perception of the “ideal” time that an application will take to move from frame to frame, to retrieve information, or to process their input has remained remarkably static for about thirty years, at around 200 ms. Anything else appears, and always has, to be either fast or slow. This could explain why commercial applications, unlike games, simulations and communications, aren’t noticeably faster now than they were when I started programming in the Seventies. Sure, they do a great deal more, but the SLAs that I negotiated in the 1980s for application performance are very similar to what they are nowadays. To prove to myself that this wasn’t just some rose-tinted misperception on my part, I cranked up a Z80-based Jonos CP/M machine (1985) in the roof-space. Within 20 seconds from cold, it had loaded Wordstar and I was ready to write. OK, I got it wrong: some things were faster 30 years ago. Sure, I’d now have had all sorts of animations, wizzy graphics, and other comforting features, but it seems a pity that we have used all that extra CPU and memory to increase the scope of what we develop, and the graphical prettiness, but not to speed the processes needed to complete a business procedure. Never mind the weight, the response time’s great! To achieve 200 ms response times on a Z80, or similar, performance considerations influenced everything one did as a developer. If it meant writing an entire application in assembly code, applying every smart algorithm, and shortcut imaginable to get the application to perform to spec, then so be it. As a result, I’m a dyed-in-the-wool performance freak and find it difficult to change my habits. Conversely, many developers now seem to feel quite differently. While all will acknowledge that performance is important, it’s no longer the virtue is once was, and other factors such as user-experience now take precedence. Am I wrong? If not, then perhaps we need a new school of development technique to rival Agile, dedicated once again to producing applications that smoke the rear wheels rather than pootle elegantly to the shops; that forgo skeuomorphism, cute animation, or architectural elegance in favor of the smell of hot rubber. I struggle to name an application I use that is truly notable for its blistering performance, and would dearly love one to do my everyday work – just as long as it doesn’t go faster than my brain.

    Read the article

  • Alternatives to multiple inheritance for my architecture (NPCs in a Realtime Strategy game)?

    - by Brettetete
    Coding isn't that hard, actually. The hard part is to write code that makes sense and is readable and understandable. So I want to become a better developer and create some solid architecture. Specifically, I want to create an architecture for NPCs in a video game. It is a real-time strategy game like Starcraft, Age of Empires, Command & Conquer, etc. So I'll have different kinds of NPCs. An NPC can have one to many abilities (methods) of these: Build(), Farm() and Attack(). Examples: Worker can Build() and Farm(); Warrior can Attack(); Citizen can Build(), Farm() and Attack(); Fisherman can Farm() and Attack(). I hope everything is clear so far. So now I have my NPC types and their abilities, but let's come to the technical/programmatic aspect. What would be a good programmatic architecture for my different kinds of NPCs? Okay, I could have a base class. Actually, I think this is a good way to stick with the DRY principle, so I can have methods like WalkTo(x,y) in my base class, since every NPC will be able to move. But now let's come to the real problem: where do I implement my abilities? (Remember: Build(), Farm() and Attack().) Since the abilities consist of the same logic, it would be annoying and would break the DRY principle to implement them for each NPC (Worker, Warrior, ...). Okay, I could implement the abilities within the base class. This would require some kind of logic that verifies whether an NPC can use ability X: IsBuilder, CanBuild, ... I think it is clear what I want to express, but I don't feel good about this idea. It sounds like a bloated base class with too much functionality. I use C# as the programming language, so multiple inheritance isn't an option here. That means having extra base classes like Fisherman : Farmer, Attacker won't work.
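    One common answer to this kind of question — offered here only as an illustrative sketch, not the definitive design — is composition: each ability becomes its own small component, and an NPC is assembled from whichever abilities it should have, so the shared logic lives in exactly one place and the base class stays thin.

        using System;
        using System.Collections.Generic;

        public interface IAbility { }

        public sealed class BuildAbility : IAbility
        {
            public void Build() { /* shared build logic lives here once */ }
        }

        public sealed class FarmAbility : IAbility
        {
            public void Farm() { /* shared farm logic */ }
        }

        public sealed class AttackAbility : IAbility
        {
            public void Attack(Npc target) { /* shared attack logic */ }
        }

        public class Npc
        {
            private readonly Dictionary<Type, IAbility> abilities = new Dictionary<Type, IAbility>();

            public void AddAbility(IAbility ability) { abilities[ability.GetType()] = ability; }

            public T GetAbility<T>() where T : class, IAbility
            {
                IAbility ability;
                return abilities.TryGetValue(typeof(T), out ability) ? (T)ability : null;
            }

            public void WalkTo(float x, float y) { /* shared movement code */ }
        }

        // Assembling the unit types from the question:
        // var worker  = new Npc(); worker.AddAbility(new BuildAbility()); worker.AddAbility(new FarmAbility());
        // var warrior = new Npc(); warrior.AddAbility(new AttackAbility());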

    Read the article

  • Disk drive for / not ready on boot after upgrade from 10.04 to 12.04

    - by Mathieu M-Gosselin
    After upgrading (using the Upgrade button from the update manager) from 10.04.4 to 12.04.1, I cannot boot anymore. Upon booting, I am greeted with the Ubuntu logo and the error "The disk drive for / is not ready yet or not present". I have the option to wait, to skip and to access a basic shell. Waiting overnight did nothing, skipping just gives me the same error for /tmp, /home, then for a UUID and finally it just goes to a black screen with a white "_" in the top left corner. My setup is a dual boot one with XP on a single hard drive, I use separate partitions for / and /home. Back in the day I installed 8.04 directly from the CD while leaving a partition for XP, which I installed after. This setup had never caused any such issues, even when upgrading from 8.04 to 10.04. I have done plenty of research regarding this issue, as many others seem to have had similar issues after doing the same upgrade as me. However, while for most what fixed the problem was running: apt-get -f install after remounting / in read-write, it didn't do it for me. I got dependency errors (see here), which I also investigated. I found https://bugs.launchpad.net/ubuntu/+source/python-defaults/+bug/990740 where most people say the solution that worked is (prior to running the above command) running: apt-get install -o APT::Immediate-Configure=false -f apt python-minimal but that also got me a lot of dependencies errors as output (see here), similar to #34 in the above thread. I also read that running: dpkg --configure -a could help, at first it wouldn't run because it had trouble parsing /var/lib/dpkg/status since there was an extra blank line in a package description (see https://bugs.launchpad.net/ubuntu/+source/dpkg/+bug/916799) but I removed it using vim (and then reran the command). It still gives me output that looks like an error, though. Here it is: http://paste.ubuntu.com/1338074/. I also tried re-running the above apt-get commands after that, to no avail. I'm running out of things to try in the hope of getting this fixed, your help would be very much appreciated! Thank you in advance.

    Read the article

  • Indefinite loops where the first time is different

    - by George T
    This isn't a serious problem or anything someone has asked me to do, just a seemingly simple thing that I came up with as a mental exercise but has stumped me and which I feel that I should know the answer to already. There may be a duplicate but I didn't manage to find one. Suppose that someone asked you to write a piece of code that asks the user to enter a number and, every time the number they entered is not zero, says "Error" and asks again. When they enter zero it stops. In other words, the code keeps asking for a number and repeats until zero is entered. In each iteration except the first one it also prints "Error". The simplest way I can think of to do that would be something like the folloing pseudocode: int number = 0; do { if(number != 0) { print("Error"); } print("Enter number"); number = getInput(); }while(number != 0); While that does what it's supposed to, I personally don't like that there's repeating code (you test number != 0 twice) -something that should generally be avoided. One way to avoid this would be something like this: int number = 0; while(true) { print("Enter number"); number = getInput(); if(number == 0) { break; } else { print("Error"); } } But what I don't like in this one is "while(true)", another thing to avoid. The only other way I can think of includes one more thing to avoid: labels and gotos: int number = 0; goto question; error: print("Error"); question: print("Enter number"); number = getInput(); if(number != 0) { goto error; } Another solution would be to have an extra variable to test whether you should say "Error" or not but this is wasted memory. Is there a way to do this without doing something that's generally thought of as a bad practice (repeating code, a theoretically endless loop or the use of goto)? I understand that something like this would never be complex enough that the first way would be a problem (you'd generally call a function to validate input) but I'm curious to know if there's a way I haven't thought of.
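    For what it's worth, a common compromise (a judgment call rather than the one right answer) is to factor the prompt-and-read into a small helper and prime the loop with it, so the zero test appears exactly once and only a call — not logic — is repeated:

        using System;

        class Program
        {
            // The prompt-and-read pair is repeated only as a call, never as duplicated logic.
            static int Ask()
            {
                Console.WriteLine("Enter number");
                return int.Parse(Console.ReadLine());   // no input validation, to keep the sketch short
            }

            static void Main()
            {
                int number = Ask();            // priming read
                while (number != 0)            // the zero test appears exactly once
                {
                    Console.WriteLine("Error");
                    number = Ask();
                }
            }
        }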

    Read the article

  • Handy Generic JQuery Functions

    - by Steve Wilkes
    I was a bit of a late-comer to the JQuery party, but now I've been using it for a while it's given me a host of options for adding extra flair to the client side of my applications. Here's a few generic JQuery functions I've written which can be used to add some neat little features to a page. Just call any of them from a document ready function. Apply JQuery Themeroller Styles to all Page Buttons   The JQuery Themeroller is a great tool for creating a theme for a site based on colours and styles for particular page elements. The JQuery.UI library then provides a set of functions which allow you to apply styles to page elements. This function applies a JQuery Themeroller style to all the buttons on a page - as well as any elements which have a button class applied to them - and then makes the mouse pointer turn into a cursor when you mouse over them: function addCursorPointerToButtons() {     $("button, input[type='submit'], input[type='button'], .button") .button().css("cursor", "pointer"); } Automatically Remove the Default Value from a Select Box   Required drop-down select boxes often have a default option which reads 'Please select...' (or something like that), but once someone has selected a value, there's no need to retain that. This function removes the default option from any select boxes on the page which have a data-val-remove-default attribute once one of the non-default options has been chosen: function removeDefaultSelectOptionOnSelect() {     $("select[data-val-remove-default='']").change(function () {         var sel = $(this);         if (sel.val() != "") { sel.children("option[value='']:first").remove(); }     }); } Automatically add a Required Label and Stars to a Form   It's pretty standard to have a little * next to required form field elements. This function adds the text * Required to the top of the first form on the page, and adds *s to any element within the form with the class editor-label and a data-val-required attribute: function addRequiredFieldLabels() {     var elements = $(".editor-label[data-val-required='']");     if (!elements.length) { return; }     var requiredString = "<div class='editor-required-key'>* Required</div>";     var prependString = "<span class='editor-required-label'> * </span>"; var firstFormOnThePage = $("form:first");     if (!firstFormOnThePage.children('div.editor-required-key').length) {         firstFormOnThePage.prepend(requiredString);     }     elements.each(function (index, value) { var formElement = $(this);         if (!formElement.children('span.editor-required-label').length) {             formElement.prepend(prependString);         }     }); } I hope those come in handy :)

    Read the article

  • C# 5: At last, async without the pain

    - by Alex.Davies
    For me, the best feature in Visual Studio 11 is the async and await keywords that come with C# 5. I am a big fan of asynchronous programming: it frees up resources, in particular the thread that a piece of code needs to run in. That lets that thread run something else, while waiting for your long-running operation to complete. That's really important if that thread is the UI thread, or if it's holding a lock because it accesses some data structure. Before C# 5, I think I was about the only person in the world who really cared about asynchronous programming. The trouble was that you had to go to extreme lengths to make code asynchronous. I would forever be writing methods that, instead of returning a value, accepted an extra argument that is a "continuation". Then, when calling the method, I'd have to pass a lambda in to it, which contained all the stuff that needed to happen after the method finished. Here is a real snippet of code that is in .NET Demon:

        m_BuildControl.FilterEnabledForBuilding(
            projects,
            enabledProjects => m_OutOfDateProjectFinder.FilterNeedsBuilding(
                enabledProjects,
                newDirtyProjects =>
                {
                    // Mark any currently broken projects as dirty
                    newDirtyProjects.UnionWith(m_BrokenProjects);
                    // Copy what we found into the set of dirty things
                    m_DirtyProjects = newDirtyProjects;
                    RunSomeBuilds();
                }));

    It's just obtuse. Who puts a lambda inside a lambda like that? Well, me obviously. But surely enabledProjects should just be the return value of FilterEnabledForBuilding? And newDirtyProjects should just be the return value of FilterNeedsBuilding? C# 5 async/await lets you write asynchronous code without it looking so stupid. Here's what I plan to change that code to, once we upgrade to VS 11:

        var enabledProjects = await m_BuildControl.FilterEnabledForBuilding(projects);
        var newDirtyProjects = await m_OutOfDateProjectFinder.FilterNeedsBuilding(enabledProjects);

        // Mark any currently broken projects as dirty
        newDirtyProjects.UnionWith(m_BrokenProjects);

        // Copy what we found into the set of dirty things
        m_DirtyProjects = newDirtyProjects;
        RunSomeBuilds();

    Much easier to read! But how is this the same code? If we were on the UI thread, doesn't the UI thread have to block while FilterEnabledForBuilding runs? No, it doesn't, and that's the magic of the await keyword! It cuts your method up into its constituent pieces, much like I did manually with lambdas before. When you run it, only the piece up to the first await actually runs. The rest is passed to FilterEnabledForBuilding as a continuation, which will get called back whenever that method is finished. In the meantime, our thread returns, and can go back to making the UI responsive, or whatever else threads do in their spare time. This is actually a massive simplification, and if you're interested in all the gory details, and speed hacks that the await keyword actually does for you, I recommend Jon Skeet's blog posts about it.
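
    To see the same mechanics outside of .NET Demon, here is a minimal, self-contained C# 5 sketch (the URL and method names are made up for the example). The method hands control back to its caller at the first await, and the rest of it runs as a compiler-generated continuation once the download finishes:

        using System;
        using System.Net.Http;
        using System.Threading.Tasks;

        class AwaitSketch
        {
            static async Task<int> GetPageLengthAsync(string url)
            {
                using (var client = new HttpClient())
                {
                    // The calling thread is released here; everything below
                    // runs later, as a continuation, when the task completes.
                    string body = await client.GetStringAsync(url);
                    return body.Length;
                }
            }

            static void Main()
            {
                Task<int> lengthTask = GetPageLengthAsync("http://example.com");
                // Blocking on .Result is only acceptable in a console demo like this;
                // in a UI application you would await the task instead.
                Console.WriteLine(lengthTask.Result);
            }
        }

    The compiler does the slicing that the nested-lambda version above does by hand, which is exactly the transformation the article describes.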

    Read the article

  • Function like C# properties?

    - by alan2here
    I was directed here from SO as a better stack exchange site for this question. I've been thinking about the neatness and expression of C# properties over functions, although they only currently work where no parameters are used, and wondered: is it possible - and if not, why not - to have a stand-alone, function-like C# property? For example:

        public class test
        {
            private byte n = 4;

            public test()
            {
                func = 2;
                byte n2 = func;
                func;
            }

            private byte func
            {
                get { return n; }
                set { n = value; }
                func { n++; }
            }
        }

    edit: Sorry for the vagueness first time round. I'm going to add some info and motivation. The 'n++' here is just a simple example, a placeholder; it's not intended to be representative of the actual code that would be used. I'm also looking at this from the point of view of the property construct as it is, not in the context of using it for 'get_xyz' and 'set_xyz' member functions, which is certainly useful, but instead comparing it more abstractly to functions and other programmatic elements. A 'get' property can be used instead of a function that takes no parameters; syntactically the difference is perhaps only aesthetic, but as I see it noticeably nicer. However, properties also add the potential for an extra layer of polymorphism, one that relates to the context in which they are used - setting ('func = 4;'), getting ('int n = func;') or a statement-like call ('func;') - as well as the more common parameter-based polymorphism. That would potentially allow a lot of expression and contextual information regarding how others would use your functions. As uses and definitions would remain the same in many places, it shouldn't break existing code.

        private byte func
        {
            get { }
            get bool { }
            set { }
            func { }
            func(bool) { }
            func(byte, myType) { }
            // etc...
        }

    So a read-only function would look like this:

        private byte func
        {
            get { }
        }

    A normal function like this:

        private void func
        {
            func { }
        }

    A function with parameter polymorphism like this:

        private byte func
        {
            func(bool) { }
            func(byte, myType) { }
        }

    And a function that could return a value, or just compute, depending on the context it is used in - and that also has more conventional parameter polymorphism - like so:

        private byte func
        {
            get { }
            func(bool) { }
            func(byte, myType) { }
        }
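
    For reference, here is a hedged sketch of how the same shape has to be expressed in the C# that actually exists: a property covers the parameterless get/set uses, and the parameterised or statement-like uses have to become ordinary, separately named methods (all names below are illustrative only). Notably, C# will not even let a method share a property's name within the same class, which is part of what the hypothetical syntax above would change:

        using System;

        public class Test
        {
            private byte n = 4;

            // A standard property covers the "func = 2;" and "byte n2 = func;" cases.
            private byte Number
            {
                get { return n; }
                set { n = value; }
            }

            // The "func;" and parameterised cases have to be plain methods today.
            private void Increment() { n++; }
            private byte Combine(byte b, bool doubleIt) { return (byte)(doubleIt ? b * 2 : b); }

            public Test()
            {
                Number = 2;          // set-style use
                byte n2 = Number;    // get-style use
                Increment();         // statement-style use
                Console.WriteLine(Combine(n2, true));
            }
        }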

    Read the article

  • What is the best way to implement collision detection using Bullet physics engine and a track generated from a curve?

    - by tigrou
    I am developing a small racing game where the track is generated from a curve. As said above, the track is generated but not infinite: the track for one level fits in memory with no problem and will contain a reasonably small number of triangles. For collisions I would like to use the Bullet physics engine, and I want to know the best way to handle collisions with the track efficiently. NOTE: the track will be stored as a static rigid body (mass = 0), and the player will be represented by a sphere shape for collisions. Here are some possibilities I have in mind:

    1. Create one rigid body, then put all triangles of the track (except non-collidable stuff) into it. Result: 1 body with many triangles (e.g. 30000 triangles).

    2. Split the track into several sections (e.g. 10 sections). Then, for each section, create a rigid body and put the corresponding triangles in it. Result: a small number of bodies, each with a relatively small number of triangles (e.g. 1500 triangles per section).

    3. Split the track into many sub-sections (e.g. 1200 sections), where one sub-section = a very small step when generating the curve. Again, for each sub-section, create a body and put its triangles in it. Result: many bodies, each with a very small number of triangles (e.g. 20 triangles). Advantage: it would be possible to attach "extra data" to each sub-section that could be used when handling collisions.

    4. Same as 2, but only keep sections N and N+1 in the physics engine (where N = the section the player is currently in). When the player reaches section N+1, unload section N and load section N+2, and so on. Issue: harder to implement, and it causes problems if the player suddenly "jumps" from one section to another (e.g. the player flies off section N and falls onto section N+4 underneath: no collision is handled and the player falls into the void).

    5. Same as 4, but with many sub-sections. Issues: since sub-sections are very small, bodies would constantly be added to and removed from the physics engine at runtime, and the chance of the player accidentally skipping some sections and falling into the void is higher than in 4.
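
    As a rough illustration of options 2 and 3, here is a hedged C# sketch that buckets the generated triangles into a fixed number of sections by their position along the curve (the Triangle type and the idea of tagging each triangle with its curve parameter are assumptions made for the example, not Bullet API). Each resulting bucket would then be turned into one static (mass = 0) triangle-mesh body in whatever Bullet binding is being used:

        using System;
        using System.Collections.Generic;

        // Hypothetical triangle record for the sketch - not a Bullet type.
        struct Triangle
        {
            public float CurveT;   // normalised position (0..1) along the curve where this triangle was generated
            // ... vertex data would live here ...
        }

        static class TrackSections
        {
            // Group triangles into sectionCount buckets by curve parameter.
            public static List<List<Triangle>> Split(IReadOnlyList<Triangle> triangles, int sectionCount)
            {
                var sections = new List<List<Triangle>>(sectionCount);
                for (int i = 0; i < sectionCount; i++)
                {
                    sections.Add(new List<Triangle>());
                }

                foreach (var tri in triangles)
                {
                    // Clamp so CurveT == 1.0 still falls into the last section.
                    int index = Math.Min((int)(tri.CurveT * sectionCount), sectionCount - 1);
                    sections[index].Add(tri);
                }
                return sections;
            }
        }

    With sectionCount around 10 this corresponds to option 2, and with a very large count to option 3; option 4 would additionally add and remove the per-section bodies as the player moves between sections.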

    Read the article

  • Using the @ in SQL Azure Connections

    - by BuckWoody
    The other day I was working with a client on an application they were changing to a hybrid architecture – some data on-premise and other data in SQL Azure and Windows Azure Blob storage. I had them make a couple of corrections - the first was that all communications to SQL Azure need to be encrypted. It’s a simple addition to the connection string, depending on the library you use. Which brought up another interesting point. They had been using something that looked like this, using the .NET provider:

        Server=tcp:[serverName].database.windows.net;Database=myDataBase;
        User ID=LoginName;Password=myPassword;
        Trusted_Connection=False;Encrypt=True;

    This includes most of the formatting needed for SQL Azure. It specifies TCP as the transport mechanism, the database name is included, Trusted_Connection is off, and encryption is on. But it needed one more change:

        Server=tcp:[serverName].database.windows.net;Database=myDataBase;
        User ID=[LoginName]@[serverName];Password=myPassword;
        Trusted_Connection=False;Encrypt=True;

    Notice the difference? It’s the User ID parameter. It includes the @ symbol and the name of the server – not the whole DNS name, just the server name itself. The developers were a bit surprised, since it had been working with the first format that just used the user name. Why did both work, and why is one better than the other? It has to do with the connection library you use. For most libraries, the user name is enough. But for some libraries (subject to change so I don’t list them here) the server name parameter isn’t sent in a way the load balancer understands, so you need to include the server name right in the login so the system can parse it correctly. Keep in mind, the string limit for that is 128 characters – so take the @ symbol and the server name into consideration for user names. The user connection info is detailed here: http://msdn.microsoft.com/en-us/library/ee336268.aspx Upshot? Include the @servername on your connection string just to be safe. And plan for the extra characters that adds to your user names.
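
    If the string is being built in .NET code, one way to keep these rules in one place is a small helper around SqlConnectionStringBuilder. The sketch below uses placeholder server and credential names and simply bakes in the settings called out above (TCP data source, login@servername, no trusted connection, encryption on):

        using System.Data.SqlClient;

        static class AzureConnectionStrings
        {
            public static string Build(string serverName, string loginName, string password, string database)
            {
                var builder = new SqlConnectionStringBuilder
                {
                    DataSource = "tcp:" + serverName + ".database.windows.net",
                    InitialCatalog = database,
                    UserID = loginName + "@" + serverName,  // include the @servername, as described above
                    Password = password,
                    IntegratedSecurity = false,             // Trusted_Connection=False
                    Encrypt = true                          // always encrypt SQL Azure traffic
                };
                return builder.ConnectionString;
            }
        }

    Remember that the 128-character limit mentioned above needs to cover the combined login@servername value.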

    Read the article
