Search Results

Search found 6515 results on 261 pages for 'half life'.


  • Using SQL Source Control with Fortress or Vault – Part 2

    - by AjarnMark
    In Part 1, I started talking about using Red-Gate's newest version of SQL Source Control and how I really like it as a viable method to source control your database development.  It looks like this is going to turn into a little series where I will explain how we have done things in the past, and how life is different with SQL Source Control.  I will also explain some of my philosophy and methodology around deployment with these tools.  But for now, let's talk about some of the good and the bad of the tool itself.

    More Kudos and Features

    I mentioned previously how impressed I was with the responsiveness of Red-Gate's team.  I have been having an ongoing email conversation with Gyorgy Pocsi, and as I have run into problems or requested that things behave a little differently, it has not been more than a day or two before a new build is ready for me to download and test.  Quite impressive!  I'm sure many of the requests I put in were already in the plans, so I can't really take credit for them, but throughout this conversation, Red-Gate has implemented several features that were not in the first Early Access version.  Those include:

      • Honoring the Fortress configuration option to require Work Item (Bug) IDs on check-ins.
      • Adding the check-in comment text as a comment to the Work Item.
      • Adding the list of checked-in files, along with the Fortress links for automatic History and DIFF views.
      • Updating the status of a Work Item on check-in (e.g. setting the item to Complete or, in our case, "Dev-Complete").
      • Support for the Fortress 2.0 API, and not just the Vault Pro 5.1 API.  (See the later notes regarding support for Fortress 2.0.)

    These were all features that I felt we really needed to have in place before I could honestly consider converting my team to using SQL Source Control on a regular basis.  Now that I have those, my only excuse is not wanting to switch boats on the team mid-stream.  So when we wrap up our current release in a few weeks, we will make the jump.  In the meantime, I will continue to bang on it to make sure it is stable.  It passed one test for stability when I did a test load of one of our larger database schemas into Fortress with SQL Source Control.  That database has about 150 tables, 200 User-Defined Functions and nearly 900 Stored Procedures.  The initial load to source control went smoothly and took just a brief amount of time.

    Warnings

    Remember that this IS still in pre-release stage, and while I have not had any problems after that first hiccup I wrote about last time, you still need to treat it with a healthy respect.  As I understand it, the RTM is targeted for February.  There are a couple more features that I hope make it into the final release version, but if not, they'll probably be coming soon thereafter.  Those are:

      • A Browse feature to let me look up the Work Item ID instead of having to remember it or look back in my Item details.  This is just a matter of convenience.  I normally have my Work Item list open anyway, so I can easily look it up, but hey, why not make it even easier?
      • A multi-line comment area.  The current space for writing check-in comments is a single-line text box.  I would like to have a multi-line space as I sometimes write lengthy commentary.  But I recognize that it is a struggle to get most developers to put in more than the word "fixed" as their comment, so this meets the need of the majority as-is, and it's not a show-stopper for us.
      • Merge.  SQL Source Control currently does not have a Merge feature.  If two or more people make changes to the same database object, you will get a warning of the conflict and have to choose which one wins (and then manually edit to include the others' changes).  I think it unlikely you will run into actual conflicts in Stored Procedures and Functions, but you might with Views or Tables.  This will be nice to have, but I'm not losing any sleep over it, and I have multiple tools at my disposal to do merges manually, so it's really not a show-stopper for us.

    Automation has its limits.  As cool as this automation is, it has its limits, and there are some changes that you will be better off scripting yourself.  For example, if you are refactoring table definitions and want to change a column name, you can write that as a quick sp_rename command and preserve the data within that column (see the sketch at the end of this post).  But because this tool is looking just at a before and after picture, it cannot tell that you just renamed a column.  To the tool, it looks like you dropped one column and added another.  This is not a knock against Red-Gate; all automated scripting tools have this issue unless they are actively monitoring your every step to know exactly what you are doing.  This means that when you go to deploy your changes, SQL Compare will script the change as a column drop and add, or will attempt to rebuild the entire table.  Unfortunately, neither of these approaches will preserve the existing data in that column the way an sp_rename will, so you are better off scripting that change yourself.  Thankfully, SQL Compare will produce warnings about the potential loss of data before it does the actual synchronization and give you a chance to intercept the script and do it yourself.

    Also, please note that the current official word is that SQL Source Control supports Vault Professional 5.1 and later.  Vault Professional is the new name for what was previously known as Fortress.  (You can read about the name change on SourceGear's site.)  The last version of Fortress was 2.x, and the API for Fortress 2.x is different from the API for Vault Pro.  At my company, we are currently running Fortress 2.0, with plans to upgrade to Vault Pro early next year.  Gyorgy was able to come up with a work-around for me to be able to use SQL Source Control with Fortress 2.0, even though it is not officially supported.  If you are using Fortress 2.0 and want to use SQL Source Control, be aware that this is not officially supported, but it is working for us, and you can probably get the work-around instructions from Red-Gate if you're really, really nice to them.

    Upcoming Topics

    Some of the other topics I will likely cover in this series over the next few weeks are:

      • How we used to do source control back in the old days (a few weeks ago) before SQL Source Control was available to Vault users
      • What happens when you restore a database that is linked to source control
      • Handling multiple development branches of source code
      • Concurrent development practices and handling conflicts
      • Deployment tips and best practices
      • A recap after using the tool for a while
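    On the sp_rename point above, here is a minimal sketch (my own illustration, not Red-Gate tooling; the connection string, table, and column names are hypothetical) of scripting the rename yourself from .NET so the column's data survives, instead of letting a compare tool script it as a drop-and-add:

        using System.Data.SqlClient;

        // A hand-written migration step: rename the column with sp_rename
        // before letting SQL Compare synchronize the rest of the schema.
        class RenameColumnStep
        {
            static void Main()
            {
                // Hypothetical connection string and object names.
                const string connectionString =
                    "Data Source=.;Initial Catalog=MyDatabase;Integrated Security=true";

                using (var connection = new SqlConnection(connectionString))
                using (var command = connection.CreateCommand())
                {
                    connection.Open();
                    // sp_rename changes only metadata; the column's data is preserved.
                    command.CommandText =
                        "EXEC sp_rename 'dbo.Customer.CustName', 'CustomerName', 'COLUMN'";
                    command.ExecuteNonQuery();
                }
            }
        }

    Run a step like this first, and the subsequent compare sees only a column that already has the right name, rather than a drop and an add.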


  • VLC 2.0.3 on Lubuntu 12.04: No audio?

    - by drezabek
    I am on Lubuntu 12.04, and I have installed VLC media player version 2.0.3. When I try to play an audio file, it appears to load fine; the media position bar displays the progress and it says it is playing, but I can't hear anything through my speakers. I can hear game audio, web audio, and audio from SMPlayer just fine, but with VLC I can't hear anything. Below is the "Messages" output with the verbosity option set to "2 (debug)":

        main debug: processing request item: The Bottom, node: Playlist, skip: 0
        main debug: resyncing on The Bottom
        main debug: The Bottom is at 0
        main debug: starting playback of the new playlist item
        main debug: resyncing on The Bottom
        main debug: The Bottom is at 0
        main debug: creating new input thread
        main debug: Creating an input for 'The Bottom'
        main debug: TIMER input launching for 'Floex - Machinarium Soundtrack - 01 The Bottom.flac' : 23.706 ms - Total 23.706 ms / 1 intvls (Avg 23.706 ms)
        main debug: using timeshift granularity of 50 MiB, in path '/tmp'
        main debug: `file:///home/doug/Music/unsorted/Floex%20-%20Machinarium%20Soundtrack/Floex%20-%20Machinarium%20Soundtrack%20-%2001%20The%20Bottom.flac' gives access `file' demux `' path `/home/doug/Music/unsorted/Floex%20-%20Machinarium%20Soundtrack/Floex%20-%20Machinarium%20Soundtrack%20-%2001%20The%20Bottom.flac'
        main debug: creating demux: access='file' demux='' location='/home/doug/Music/unsorted/Floex%20-%20Machinarium%20Soundtrack/Floex%20-%20Machinarium%20Soundtrack%20-%2001%20The%20Bottom.flac' file='/home/doug/Music/unsorted/Floex - Machinarium Soundtrack/Floex - Machinarium Soundtrack - 01 The Bottom.flac'
        main debug: looking for access_demux module: 3 candidates
        main debug: no access_demux module matching "file" could be loaded
        main debug: TIMER module_need() : 2.332 ms - Total 2.332 ms / 1 intvls (Avg 2.332 ms)
        main debug: creating access 'file' location='/home/doug/Music/unsorted/Floex%20-%20Machinarium%20Soundtrack/Floex%20-%20Machinarium%20Soundtrack%20-%2001%20The%20Bottom.flac', path='/home/doug/Music/unsorted/Floex - Machinarium Soundtrack/Floex - Machinarium Soundtrack - 01 The Bottom.flac'
        main debug: looking for access module: 2 candidates
        filesystem debug: opening file `/home/doug/Music/unsorted/Floex - Machinarium Soundtrack/Floex - Machinarium Soundtrack - 01 The Bottom.flac'
        main debug: using access module "filesystem"
        main debug: TIMER module_need() : 0.762 ms - Total 0.762 ms / 1 intvls (Avg 0.762 ms)
        main debug: Using stream method for AStream*
        main debug: starting pre-buffering
        main debug: received first data after 0 ms
        main debug: pre-buffering done 1024 bytes in 0s - 43478 KiB/s
        main debug: looking for stream_filter module: 7 candidates
        main debug: no stream_filter module matching "any" could be loaded
        main debug: TIMER module_need() : 0.236 ms - Total 0.236 ms / 1 intvls (Avg 0.236 ms)
        main debug: looking for stream_filter module: 1 candidate
        main debug: using stream_filter module "stream_filter_record"
        main debug: TIMER module_need() : 0.156 ms - Total 0.156 ms / 1 intvls (Avg 0.156 ms)
        main debug: creating demux: access='file' demux='' location='/home/doug/Music/unsorted/Floex%20-%20Machinarium%20Soundtrack/Floex%20-%20Machinarium%20Soundtrack%20-%2001%20The%20Bottom.flac' file='/home/doug/Music/unsorted/Floex - Machinarium Soundtrack/Floex - Machinarium Soundtrack - 01 The Bottom.flac'
        main debug: looking for demux module: 54 candidates
        flacsys debug: Picture type=3 mime=image/png description='' file length=679371
        qt4 debug: IM: Setting an input
        main debug: looking for packetizer module: 21 candidates
        main debug: using packetizer module "packetizer_flac"
        main debug: TIMER module_need() : 0.211 ms - Total 0.211 ms / 1 intvls (Avg 0.211 ms)
        main debug: using demux module "flacsys"
        main debug: TIMER module_need() : 4.023 ms - Total 4.023 ms / 1 intvls (Avg 4.023 ms)
        main debug: looking for a subtitle file in /home/doug/Music/unsorted/Floex - Machinarium Soundtrack/
        main debug: looking for meta reader module: 2 candidates
        main debug: using meta reader module "taglib"
        main debug: TIMER module_need() : 5.245 ms - Total 5.245 ms / 1 intvls (Avg 5.245 ms)
        main debug: removing module "taglib"
        main debug: `file:///home/doug/Music/unsorted/Floex%20-%20Machinarium%20Soundtrack/Floex%20-%20Machinarium%20Soundtrack%20-%2001%20The%20Bottom.flac' successfully opened
        main debug: selecting program id=0
        main debug: looking for decoder module: 30 candidates
        main debug: using decoder module "flac"
        main debug: TIMER module_need() : 0.442 ms - Total 0.442 ms / 1 intvls (Avg 0.442 ms)
        main debug: Buffering 0%
        flac debug: decode STREAMINFO
        flac debug: channels:2 samplerate:44100 bitspersamples:16
        flac debug: STREAMINFO decoded
        main debug: Buffering 30%
        main debug: recycling audio output
        main debug: looking for audio output module: 3 candidates
        main debug: Buffering 61%
        pulse debug: using stereo channel map
        pulse debug: using library version 1.1.0
        pulse debug: (compiled with version 1.1.0, protocol 26)
        main debug: Buffering 92%
        main debug: Stream buffering done (371 ms in 2 ms)
        pulse debug: connected locally to unix:/home/doug/.pulse/dce22254e867f905188a2ce200000003-runtime/native as client #14
        pulse debug: using protocol 26, server protocol 26
        pulse debug: using buffer metrics: maxlength=4194304, tlength=9880, prebuf=0, minreq=3528
        pulse debug: connected to sink 0: alsa_output.pci-0000_00_14.2.analog-stereo
        main debug: using audio output module "pulse"
        main debug: TIMER module_need() : 4.571 ms - Total 4.571 ms / 1 intvls (Avg 4.571 ms)
        main debug: output 's16l' 44100 Hz Stereo frame=1 samples/4 bytes
        main debug: mixer 'f32l' 44100 Hz Stereo frame=1 samples/8 bytes
        main debug: filter(s) 'f32l'->'s16l' 44100 Hz->44100 Hz Stereo->Stereo
        main debug: looking for audio filter module: 14 candidates
        audio_format debug: f32l->s16l, bits per sample: 32->16
        main debug: using audio filter module "audio_format"
        main debug: TIMER module_need() : 0.187 ms - Total 0.187 ms / 1 intvls (Avg 0.187 ms)
        main debug: conversion pipeline completed
        main debug: looking for audio mixer module: 2 candidates
        main debug: using audio mixer module "float32_mixer"
        main debug: TIMER module_need() : 0.125 ms - Total 0.125 ms / 1 intvls (Avg 0.125 ms)
        main debug: input 's16l' 44100 Hz Stereo frame=1 samples/4 bytes
        main debug: looking for audio filter module: 1 candidate
        scaletempo debug: format: 44100 rate, 2 nch, 4 bps, fl32
        scaletempo debug: params: 30 stride, 0.200 overlap, 14 search
        scaletempo debug: 1.000 scale, 1323.000 stride_in, 1323 stride_out, 1059 standing, 264 overlap, 617 search, 2204 queue, fl32 mode
        main debug: using audio filter module "scaletempo"
        main debug: TIMER module_need() : 0.233 ms - Total 0.233 ms / 1 intvls (Avg 0.233 ms)
        main debug: filter(s) 's16l'->'f32l' 44100 Hz->44100 Hz Stereo->Stereo
        pulse debug: listing sink alsa_output.pci-0000_00_14.2.analog-stereo (0): Built-in Audio Analog Stereo
        main debug: looking for audio filter module: 14 candidates
        audio_format debug: s16l->f32l, bits per sample: 16->32
        main debug: using audio filter module "audio_format"
        main debug: TIMER module_need() : 0.147 ms - Total 0.147 ms / 1 intvls (Avg 0.147 ms)
        main debug: conversion pipeline completed
        pulse debug: base volume: 65536
        main debug: looking for audio filter module: 1 candidate
        equalizer debug: equalizer loaded for 44100 Hz with 10 bands 2 pass
        equalizer debug: 60 Hz -> factor:0.000000 alpha:0.003013 beta:0.993973 gamma:1.993901
        equalizer debug: 170 Hz -> factor:0.000000 alpha:0.008490 beta:0.983019 gamma:1.982437
        equalizer debug: 310 Hz -> factor:0.000000 alpha:0.015374 beta:0.969252 gamma:1.967331
        equalizer debug: 600 Hz -> factor:0.000000 alpha:0.029328 beta:0.941343 gamma:1.934254
        equalizer debug: 1000 Hz -> factor:0.000000 alpha:0.047918 beta:0.904163 gamma:1.884869
        equalizer debug: 3000 Hz -> factor:0.000000 alpha:0.130408 beta:0.739184 gamma:1.582718
        equalizer debug: 6000 Hz -> factor:0.000000 alpha:0.226555 beta:0.546889 gamma:1.015267
        equalizer debug: 12000 Hz -> factor:0.000000 alpha:0.344937 beta:0.310127 gamma:-0.181410
        equalizer debug: 14000 Hz -> factor:0.000000 alpha:0.366438 beta:0.267123 gamma:-0.521151
        equalizer debug: 16000 Hz -> factor:0.000000 alpha:0.379009 beta:0.241981 gamma:-0.808451
        main debug: using audio filter module "equalizer"
        main debug: TIMER module_need() : 0.353 ms - Total 0.353 ms / 1 intvls (Avg 0.353 ms)
        main debug: filter(s) 'f32l'->'f32l' 44100 Hz->44100 Hz Stereo->Stereo
        main debug: conversion pipeline completed
        main debug: looking for visualization2 module: 1 candidate
        main debug: looking for text renderer module: 2 candidates
        freetype debug: Building font databases.
        freetype debug: Took 0 microseconds
        freetype debug: Using Serif Bold as font from file /usr/share/fonts/truetype/ttf-dejavu/DejaVuSans.ttf
        freetype debug: using fontsize: 2
        main debug: using text renderer module "freetype"
        main debug: TIMER module_need() : 3.278 ms - Total 3.278 ms / 1 intvls (Avg 3.278 ms)
        main debug: looking for video filter2 module: 18 candidates
        swscale debug: 32x32 chroma: YUVA -> 16x16 chroma: RGBA with scaling using Bicubic (good quality)
        main debug: using video filter2 module "swscale"
        main debug: TIMER module_need() : 1.037 ms - Total 1.037 ms / 1 intvls (Avg 1.037 ms)
        main debug: looking for video filter2 module: 18 candidates
        yuvp debug: YUVP to YUVA converter
        main debug: using video filter2 module "yuvp"
        main debug: TIMER module_need() : 0.156 ms - Total 0.156 ms / 1 intvls (Avg 0.156 ms)
        main debug: Deinterlacing available
        main debug: deinterlace 0, mode blend, is_needed 0
        main debug: Opening vout display wrapper
        main debug: looking for vout display module: 6 candidates
        main debug: looking for vout window xid module: 4 candidates
        qt4 debug: requesting video...
        qt4 debug: Video was requested 0, 0
        main debug: using vout window xid module "qt4"
        main debug: TIMER module_need() : 61.671 ms - Total 61.671 ms / 1 intvls (Avg 61.671 ms)
        main debug: looking for inhibit module: 2 candidates
        main debug: using inhibit module "xdg_screensaver"
        main debug: TIMER module_need() : 0.336 ms - Total 0.336 ms / 1 intvls (Avg 0.336 ms)
        xdg_screensaver debug: started xdg-screensaver (PID = 6682)
        xcb_xv debug: connected to X11.0 server
        xcb_xv debug: vendor : The X.Org Foundation
        xcb_xv debug: version: 11103000
        xcb_xv debug: using screen 0x15a
        xcb_xv debug: using XVideo extension v2.2
        xcb_xv debug: using adaptor NV17 Video Texture
        xcb_xv debug: using port 310
        xcb_xv debug: using image format 0x30323449
        xcb_xv debug: using X11 visual ID 0x21 (depth: 24)
        xcb_xv debug: using X11 window 0x03400000
        xcb_xv debug: using X11 graphic context 0x03400002
        main debug: VoutDisplayEvent 'fullscreen' 0
        main debug: VoutDisplayEvent 'resize' 800x500 window
        main debug: using vout display module "xcb_xv"
        main debug: TIMER module_need() : 69.890 ms - Total 69.890 ms / 1 intvls (Avg 69.890 ms)
        main debug: original format sz 800x500, of (0,0), vsz 800x500, 4cc I420, sar 1:1, msk r0x0 g0x0 b0x0
        main debug: removing module "freetype"
        main debug: looking for text renderer module: 2 candidates
        freetype debug: Building font databases.
        freetype debug: Took 0 microseconds
        freetype debug: Using Serif Bold as font from file /usr/share/fonts/truetype/ttf-dejavu/DejaVuSans.ttf
        freetype debug: using fontsize: 2
        main debug: using text renderer module "freetype"
        main debug: TIMER module_need() : 4.552 ms - Total 4.552 ms / 1 intvls (Avg 4.552 ms)
        main debug: using visualization2 module "visual"
        main debug: TIMER module_need() : 84.104 ms - Total 84.104 ms / 1 intvls (Avg 84.104 ms)
        main debug: filter(s) 'f32l'->'f32l' 44100 Hz->44100 Hz Stereo->Stereo
        main debug: conversion pipeline completed
        main debug: filter(s) 'f32l'->'f32l' 44100 Hz->44100 Hz Stereo->Stereo
        main debug: conversion pipeline completed
        main debug: filter(s) 'f32l'->'f32l' 48510 Hz->44100 Hz Stereo->Stereo
        main debug: looking for audio filter module: 14 candidates
        main debug: using audio filter module "samplerate"
        main debug: TIMER module_need() : 0.375 ms - Total 0.375 ms / 1 intvls (Avg 0.375 ms)
        main debug: conversion pipeline completed
        main debug: End of audio preroll
        main debug: Decoder buffering done in 91 ms
        main warning: PTS is out of range (-9269), dropping buffer
        pulse debug: deferring start (190703 us)
        main debug: looking for video blending module: 1 candidate
        main debug: using video blending module "blend"
        main debug: TIMER module_need() : 0.275 ms - Total 0.275 ms / 1 intvls (Avg 0.275 ms)
        main debug: Detected interlaced video
        main debug: deinterlace 0, mode blend, is_needed 1
        xcb_xv debug: display is visible
        pulse debug: starting deferred
        pulse warning: too late by 93760 us
        pulse debug: changed sample rate to 44186 Hz
        pulse debug: started
        pulse warning: too late by 94474 us
        pulse debug: changed sample rate to 44229 Hz
        pulse warning: too late by 93532 us
        pulse debug: changed sample rate to 44272 Hz
        pulse warning: too late by 92829 us
        pulse debug: changed sample rate to 44315 Hz
        pulse warning: too late by 92132 us
        pulse debug: changed sample rate to 44358 Hz
        xcb_xv debug: display is visible
        pulse warning: too late by 91534 us
        pulse debug: changed sample rate to 44401 Hz
        xcb_xv debug: display is visible
        pulse warning: too late by 89482 us
        pulse debug: changed sample rate to 44440 Hz
        xcb_xv debug: display is visible
        xcb_xv debug: display is visible
        xcb_xv debug: display is visible
        pulse warning: too late by 87529 us
        pulse debug: changed sample rate to 44479 Hz
        pulse warning: too late by 84577 us
        pulse debug: changed sample rate to 44504 Hz
        main debug: auto hiding mouse cursor
        pulse warning: too late by 78562 us
        pulse debug: changed sample rate to 44492 Hz
        pulse warning: too late by 68015 us
        pulse debug: changed sample rate to 44422 Hz
        xcb_xv debug: display is visible
        xcb_xv debug: display is visible
        xcb_xv debug: display is visible
        xcb_xv debug: display is visible
        main debug: auto hiding mouse cursor
        pulse debug: changed sample rate to 44336 Hz
        xcb_xv debug: display is visible
        xcb_xv debug: display is visible
        xcb_xv debug: display is visible
        main debug: auto hiding mouse cursor

    I have had issues with VLC in the past: the audio quality was extremely crackly, as if the headphone jack was plugged in only halfway, and the sounds were extremely sharp and caused my speakers to make a ringing/vibrating noise... It would eventually start working after I messed around with the audio settings, but it happened every restart. I eventually switched to SMPlayer, but now I need some of the features that VLC offers, and I still can't use VLC. At this point, the audio cannot be heard at all, and the method I used before, messing around with the audio settings, isn't getting me anywhere. (Note: I reposted this on VideoLan's forums; the link is here: http://forum.videolan.org/viewtopic.php?f=13&t=104726) Please let me know if you need more information, or are confused by something I posted! Thanks!


  • The future for Microsoft

    - by Scott Dorman
    Originally posted on: http://geekswithblogs.net/sdorman/archive/2013/10/16/the-future-for-microsoft.aspx

    Microsoft is in the process of reinventing itself. While some may argue that it's "too little, too late" or that their growing consumer-focused strategy is wrong, the truth of the situation is that Microsoft is reinventing itself into a new company. While Microsoft is now calling itself a "devices and services" company, that's not entirely accurate. Let's look at some facts: Microsoft will always (for the long-term foreseeable future) be financially split into the following divisions:

      • Windows/Operating Systems, which for FY13 made up approximately 24% of overall revenue.
      • Server and Tools, which for FY13 made up approximately 26% of overall revenue.
      • Enterprise/Business Products, which for FY13 made up approximately 32% of overall revenue.
      • Entertainment and Devices, which for FY13 made up approximately 13% of overall revenue.
      • Online Services, which for FY13 made up approximately 4% of overall revenue.

    It is important to realize that hardware products like the Surface fall under the Windows/Operating Systems division, while products like the Xbox 360 fall under the Entertainment and Devices division. (Presumably other hardware, such as mice, keyboards, and cameras, also falls under the Entertainment and Devices division.) It's also unclear where Microsoft's recent acquisition of Nokia's handset division will fall, but let's assume that it will be under Entertainment and Devices as well. Now, for the sake of argument, let's assume a slightly different structure that I think is more in line with how Microsoft presents itself and how the general public sees it:

      • Consumer Products and Devices, which would probably make up approximately 9% of overall revenue.
      • Developer Tools, which would probably make up approximately 13% of overall revenue.
      • Enterprise Products and Devices, which would probably make up approximately 47% of overall revenue.
      • Entertainment, which would probably make up approximately 13% of overall revenue.
      • Online Services, which would probably make up approximately 17% of overall revenue.

    (Just so we're clear, in this structure hardware products like the Surface, a portion of Windows sales, and other hardware fall under the Consumer Products and Devices division. I'm assuming that more of the income for the Windows division comes from enterprise/volume licenses, so 15% of that income went to the Enterprise Products and Devices division. Most of the enterprise services, like Azure, fall under the Online Services division, so half of the Server and Tools income went there as well.)

    No matter how you look at it, the bulk of Microsoft's income still comes from not just the enterprise but also software sales, and this really shouldn't surprise anyone. So, now that the stage is set... what's the future for Microsoft? The future I see for Microsoft (again, this is just my prediction based on my own instinct, gut feel and publicly available information) is this: Microsoft is becoming a consumer-focused enterprise company. Let's look at it a different way: Microsoft is an enterprise-focused company trying to create a larger consumer presence. To a large extent, this is the exact opposite of Apple, which is really a consumer-focused company trying to create a larger enterprise presence. The major reason consumer-focused companies (like Apple) have started making in-roads into the enterprise is the "bring your own device" phenomenon.
    Yes, Apple has created some "game-changing" products, but their enterprise influence is still relatively small. Unfortunately (for this blog post at least), Apple reports revenue by hardware product rather than by business division, so it's not possible to do a direct comparison. However, in the interest of transparency, from Apple's quarterly report (filed 24 July 2013), their revenue breakdown is:

      • iPhone, which for the 3 months ending 29 June 2013 made up approximately 51% of revenue.
      • iPad, which for the 3 months ending 29 June 2013 made up approximately 18% of revenue.
      • Mac, which for the 3 months ending 29 June 2013 made up approximately 14% of revenue.
      • iPod, which for the 3 months ending 29 June 2013 made up approximately 2% of revenue.
      • iTunes, Software, and Services, which for the 3 months ending 29 June 2013 made up approximately 11% of revenue.
      • Accessories, which for the 3 months ending 29 June 2013 made up approximately 3% of revenue.

    From this, it's pretty clear that Apple is a consumer-and-hardware-focused company. At this point, you may be asking yourself, "Where is all of this going?" The answer to that lies in Microsoft's shift in company focus. They are becoming more consumer focused, but what exactly does that mean? The biggest change (at least the one that's been in the news lately) is the pending purchase of Nokia's handset division. This, in combination with their Surface line of tablets and the Xbox, will put Microsoft squarely in the realm of a hardware-focused company in addition to being a software-focused company. That can (and most likely will) shift the revenue split toward revenue based on software sales (both consumer and enterprise) and also hardware sales (mostly on the consumer side). If we look at things strictly from a Windows perspective, Microsoft clearly has a lot of irons in the fire at the moment. Discounting the various product SKUs available and painting the picture with broader strokes, there are currently 5 different Windows-based operating systems:

      • Windows Phone
          • Windows Phone 7.x, which runs on top of the Windows CE kernel
          • Windows Phone 8.x+, which runs on top of the Windows 8 kernel
      • Windows RT, the ARM-based version of Windows 8, which runs on top of the Windows 8 kernel
      • Windows (Pro), the Intel-based version of Windows 8, which runs on top of the Windows 8 kernel
      • Xbox
          • The Xbox 360, which runs its own proprietary OS.
          • The Xbox One, which runs its own proprietary OS, a version of Windows running on top of the Windows 8 kernel, and a proprietary "manager" OS which manages the other two.

    Over time, Windows Phone 7.x devices will fade, so that really leaves 4 different versions. Looking at Windows RT and Windows Phone 8.x paints an interesting story. Right now, all mobile phone devices run on some sort of ARM chip, and that doesn't look like it will change any time soon. That means Microsoft has two different Windows-based operating systems for the ARM platform. Long term, it doesn't make sense for Microsoft to continue supporting that arrangement. I have long suspected (since the Surface was first announced) that Microsoft will unify these two variants of Windows, and recent speculation from some of the leading Microsoft watchers lends credence to this suspicion. It is rumored that upcoming Windows Phone releases will include support for larger screen sizes, relax the requirement to have a hardware-based back button, and continue to improve API parity between Windows Phone and Windows RT.
    At the same time, Windows RT will include support for smaller screen sizes. Since both of these operating systems are based on the same core Windows kernel, it makes sense (both from a financial and a development resource perspective) for Microsoft to unify them. The user interfaces are already very similar. So similar, in fact, that visually it's difficult to tell them apart. To illustrate this, here are two screen captures: [screen captures not reproduced in this excerpt] Other than a few variations (the Bing News app, the picture shown in the Pictures tile, and the spacing between the tiles), these are identical. The one on the left is from my Windows 8.1 laptop (which looks the same as on my Surface RT), and the one on the right is from my Windows Phone 8 Lumia 925. This pretty clearly shows that, from a consumer perspective, there really is no practical difference between how these two operating systems look and how you interact with them. For the consumer, your entertainment device (Xbox One), phone (Windows Phone), mobile computing device (Surface [or some other vendor's tablet], laptop, netbook or ultrabook), and your desktop computing device will all look and feel the same. While many people will denounce this consistency of user experience, I think this will be a good thing in the long term, especially for the upcoming generations. For example, my 5-year-old son knows how to use my tablet, phone and Xbox because they all feature nearly identical user experiences. When Windows 8 was released, Microsoft allowed a Windows Store app to be purchased once and installed on as many as 5 devices. With Windows 8.1, this limit has been increased to over 50. Why is that important? If you consider that your phone, computing devices, and entertainment device will be running the same operating system (with minor differences related to the physical hardware chipset), that means that I could potentially purchase my son's favorite Angry Birds game once and be able to install it on all of the devices I own. (And for those of you wondering, it's only 7 [at the moment].) From an app developer perspective, the story becomes even more compelling. Right now there are differences between the different operating systems, but those differences are shrinking. The user interface technology for both is XAML, but there are different controls available and different user experience concepts. Some of the APIs available are the same, while some are not. You can't develop a Windows Phone app that can also run on Windows (either Windows Pro or RT). With each release of Windows Phone and Windows RT, those differences become smaller and smaller. Add to this mix the Xbox One, which will also feature a Windows-based operating system and the same "modern" (tile-based) user interface, and the visible distinctions between the operating systems will become even smaller. Unifying the operating systems means one set of APIs and one code base to maintain for an app that can run on multiple devices. One code base means it's easier to add features and fix bugs, and those changes become available on all devices at the same time. It also means a single app store, which will increase the discoverability and reach of your app and consolidate revenue and app profile management. Now the choice of which devices an app is available on becomes a simple checkbox decision rather than a technical limitation. Ultimately, this means more apps available to consumers, which is always good for the app ecosystem. Is all of this just rumor, speculation and conjecture?
    Of course, but it's not unfounded. As I mentioned earlier, some of the prominent Microsoft watchers are also reporting similar rumors. However, Microsoft itself has even hinted at this future with its recent organizational changes and by telling developers "if you want to develop for Xbox One, start developing for Windows 8 now." I think this pretty clearly paints the following picture:

      • Microsoft is committed to the "modern" user interface paradigm.
      • Microsoft is changing their release cadence (for all products, not just operating systems) to be faster and more modular.
      • Microsoft is going to continue to unify their OS platforms, both from a consumer perspective and a developer perspective.

    While this direction will certainly concern some people, it will excite many others. Microsoft's biggest failing has always been its failure to follow through with a strong and sustained marketing strategy that presents a consistent viewpoint and highlights what this unified and connected experience looks like and how it benefits consumers and enterprises. We've started to see some of this over the last few years, but it needs to continue and become more aggressive and consistent. In the long run, I think Microsoft will be able to pull all of these technologies and devices together into one seamless ecosystem. It isn't going to happen overnight, but my prediction is that we will be there by the end of 2016. As both a consumer and a developer, I, for one, am excited about the future of Microsoft.


  • CodePlex Daily Summary for Saturday, March 27, 2010

    New Projects

      • Alter gear SQL index Management: SQL Index management displays a list of indexes available for the chosen database and allows you to select an individual / group of indexes to be r...
      • ASP League Ladder System: An ASP ladder / league system for online gaming leagues or real-life leagues also.
      • Augmented Reality Strategy Simulator: Augmented Reality Strategy Simulator is a software suite to promote computer aided strategy planning. Sports teams can visualize their strategy usin...
      • Boo syntax highlighting for Visual Studio 2010: Simple syntax highlighting VSX add-in for the Boo language in Visual Studio 2010.
      • easySan: easySan, for simple membership management in the BRK.
      • FsUnit: FsUnit makes unit-testing with F# more enjoyable. It adds a special syntax to your favorite .NET testing framework.
      • Laughing Dog XNA Framework: Laughing Dog is a simple to use, component based 2D framework for XNA game development. At present it is very early in development and as such is f...
      • miniTodo: A TODO app thrown together in WPF as MVVM practice. Not fit for real use.
      • My Common Library on .NET with CSharp: My Common Library on .NET with CSharp; it includes database access, string encryption, data caching, StringUtility. Thank you for your view.
      • Native code wrapping using c# : fsutil sparse commands: Ever thought about creating HUGE FILES for future use but felt bad for the wasted memory? Well, SPARSE FILES are the ANSWER! This FSUTIL SPARSE CO...
      • Open SOA Platform: A centralized system for administering applications through a SOA Enterprise Service Bus: runtime environment (PROD, DEV, ...), application and s...
      • P-DBMS: Network and Database Project
      • PraiseSight: PraiseSight is supposed to become a practical tool for churches to catalog and present their songs, lyrics and presentations on a beamer. The soluti...
      • Pretty Good Frontend: Pretty Good Frontend is a sample frontend for ConfigMgr (SCCM) 2007 and MDT 2010 Zero Touch.
      • S3Appender (Appender for Log4Net that Uses Amazon S3 For Storing Log Files): The S3Appender is a log4net appender that stores log events in either a MemoryStream or FileStream and sends them to S3 based on time intervals and...
      • sEmit: sEmit (sms emitter) is an application written in C# which was built to send text messages. The project was founded in May 2009 by cansik. It works ...
      • Silverlight RIA Tools: A tool set that generates full RIA Solutions in Silverlight
      • thommo cannon: Cannon for shooting down Thommos
      • Tianjin Polytechnic University Online Judge: Online Judge System built on Microsoft technologies. Vision & Scope: a distributed OJ solution on Windows and Cloud. Technologies used or planned...
      • Tinare: Tinare is a byte encryption and decryption algorithm. The input key is a string password.
      • TinyPlug: Small Plugin Manager, written in C#. Allows a project to define supported interfaces, and at runtime add plugins which support (inherit) these in...
      • Utility niconv helps to convert text from one encoding to another: .NET implementation of the GNU iconv console converter utility. The niconv program converts text from one encoding to another encoding. In the future r...
      • WareFeed - Software Business Analytics: WareFeed is a simple but effective Software Business Analytics tool written in PHP and compatible with other languages such as .NET, Java or Python. It...
      • Y36API1: Semester project for Y36API

    New Releases

      • Alter gear SQL index Management: Setup 1.0.0: setup for first alpha release
      • ASP League Ladder System: ASPLeagueRelease_0_4_1: Release v 0.41
      • Augmented Reality Strategy Simulator: Augmented Reality Strategy Simulator: Version 1.0 Installer
      • AutoAudit: AutoAudit 1.10e: Version 1.10e will be the final iteration of version 1 development. Version 2 will begin adding switches and options. Please email your suggestio...
      • Boo syntax highlighting for Visual Studio 2010: Boo syntax VS 2010 - alpha: First release. TODO: Multiline comments!
      • Chargify.NET: Chargify.NET 0.6: Updated library, using Metered Components and updated Product information.
      • Composer: V1.0.326.1000 Alpha: Initial Alpha release. Should be stable, with minor issues.
      • CoNatural Components: CoNatural Components 1.6: Code fixes: Created helper classes to generate source code for type mapper/materializer. Fixed issue in optimized type materializer when loading ...
      • CRM External View: 1.2: New features in the v1.2 release: password protected views (no more using the Web Data Access role from v1), filtering capabilities, caching for performan...
      • Designit Video Embed Package: Release 1.1.0 beta1: You can now either have the video embedded directly in the template or have a preview in the template that opens the video in a lightbox window.
      • FsUnit: FsUnit 0.9.0 for NUnit: This release is for F# 2.0 and NUnit 2.5+.
      • Laughing Dog XNA Framework: Laughing Dog 0.0.1: Laughing Dog - Alpha - v 0.0.1. First released version of the Laughing Dog framework.
      • LiveUpload to Facebook: LiveUpload to Facebook 3.2: Version 3.2. Become a fan on Facebook! Features: Quickly and easily upload your photos and videos to Facebook, including any people tags added in Win...
      • MapWindow6: MapWindow 6.0 msi March 26: This version adds the Join feature for creating a new "featureset" with attributes that are joined with attributes from an Excel data label named 'D...
      • Mobile Broadband Logging Monitor: Mobile Broadband Logging Monitor 1.2.2: This edition supports: newer and older editions of Birdstep Technology's EasyConnect, HUAWEI Mobile Partner, MWConn, user defined location for s...
      • Multiplayer Quiz: Release 1_6_351_0: A beta release of the next version. Please leave any errors in discussions or comments.
      • Native code wrapping using c# : fsutil sparse commands: Fsutil sparse file native code - c sharp wrapper: Project Description: C# code wrapping native code - sparse files. The code is about SPARSE files - the ability to create huge files (for future us...
      • Nice Libraries: 1.30 build 50325.01: Release 1.30 build 50325.01
      • Pretty Good Frontend: Pretty Good Frontend binaries v1.0: This is the first public release of the Pretty Good Frontend binaries
      • Pylor: Pylor 0.1 alpha: This is the very first published version. I hope I can put up a sample project soon.
      • Quick Performance Monitor: Version 1.1 refresh: There was a typo or two in the sample batch file. Corrected now.
      • Rapidshare Episode Downloader: RED v0.8.3: 0.8.1 introduced the ability to advance to the next episode. In 0.8.2 a bug was found that if the episode number is less than 10, then the preceding 0...
      • RapidWebDev - .NET Enterprise Software Development Infrastructure: RapidWebDev 1.52: RapidWebDev is an infrastructure that helps to develop enterprise software solutions in Microsoft .NET easily and productively. This is the release vers...
      • thommo cannon: game: game
      • thommo cannon: setup: setup
      • thommo cannon: test: test
      • Tinare: Tinare DLL: Tinare DLL is a dynamic-link library written in C# which provides the functions to encrypt and decrypt a byte stream with Tinare.
      • WeatherBar: WeatherBar 2.1 [No Installation]: Minor changes to release 2.0 (http://weatherbar.codeplex.com/releases/view/42490). Fixed the bug that caused an exception to be thrown if the user...

    Most Popular Projects

      • MetaSharp
      • Rawr
      • WBFS Manager
      • ASP.NET Ajax Library
      • Silverlight Toolkit
      • Microsoft SQL Server Product Samples: Database
      • AJAX Control Toolkit
      • LiveUpload to Facebook
      • Windows Presentation Foundation (WPF)
      • ASP.NET

    Most Active Projects

      • Rawr
      • jQuery Library for SharePoint Web Services
      • BlogEngine.NET
      • Microsoft Biology Foundation
      • Farseer Physics Engine
      • patterns & practices: Composite WPF and Silverlight
      • LINQ to Twitter
      • Table2Class
      • Fluent Ribbon Control Suite
      • NB_Store - Free DotNetNuke Ecommerce Catalog Module


  • How did I get here? My route to Android, iPhone, Windows Phone 7, and interest in Mobile Devices

    - by Wallym
    I get asked all the time how/why I got interested in mobile and jumped on this fairly early.  I tend to give half answers because it wasn't just one thing that took me to mobile, but a whole host of separate events culminating in a specific event when I was doing market research in May/June 2008.  Let me throw out the events and the facts about me:

      • I tend to like new, different, cool stuff.  I jumped on .NET early on.  I jumped on Ajax early on.  I don't jump on every new technology that comes down the road; I'm probably the only person on the planet that doesn't "get" MVC, though I acknowledge that a lot of people do and it solves a number of problems in the default settings of ASP.NET WebForms.
      • I remember buying an early Windows CE device.  It was interesting, but dang, this stylus thing sucks.  After I lost my third stylus, I just gave up.
      • I got my first mobile phone in early 1999.  Reception was crappy, but I could see the value in being mobile.
      • In 1999, I worked on a manufacturing systems project.  One piece of the project was a set of handheld devices on the shop floor.  While the UI was crappy DOS based (yes, I said DOS, as in Disk Operating System version 6.22), I could see that the wireless world was a direction I wanted to be in.
      • In 2000, Microsoft released the first public alpha of .NET.  Very cool stuff indeed.  One piece of the puzzle was a set of mobile controls for ASP.NET.  I built numerous test apps as well as mobile versions using these mobile controls.  Now, the mobile UIs of the time were based on WML, which was crap.
      • I could read all the analysis of mobile and read all about growth rates.  Now, you have to realize that growth rates can be impressive when dealing with small numbers, but I knew it was a comer.
      • In our first book, I got talked out of mobile because of the line from the publisher: "Wally, mobile doesn't sell."
      • Blackberry was the dominant device of the mid 2000s.  Its users were referred to as "Crackberry addicts."  Unfortunately, the mobile development experience for native apps was crap and the web experience was fairly rough as well, but if they could get the ecosystem started, other phones and better Blackberries would come out.  I finally jumped into using a Blackberry.
      • Sometime around 2006, I heard "Wally, mobile doesn't sell" again.  Now, anyone that knows me knows that someone saying something like this to me means I'll keep trying it.
      • The phones of the mid 2000s were moving to be more graphical, but there were too many that had this idea that they had to use a stylus.  Styluses suck.  They get lost too easily.
      • I worked on a project in 2007 and 2008 for a startup trying to answer the question of "What is there to do where I am at?"  For some reason, they wanted to be tied to PCs.  As it became obvious that they were having problems, their investor asked us to do some market research and to figure out what the marketplace did want.  One of the important things that I figured out was that we lived in a mobile world, and if you had a mobile app, it needed to be on a mobile device, not tied to a desktop/laptop/netbook device.  If there was any single event, this was it: I was doing some market research and sat and talked to people in a bar/restaurant in Atlanta called "The Grove" on Lavista.  The consensus of the people that I talked to was that they wanted their data wherever they were: laptop, PC, mobile, wherever.
      • In 2007, Apple released the iPhone.  Wow, what an impressive device, even with all the problems of a 1st generation device.  I bought an iPod Touch 1st generation to understand touch better, one of the best decisions I ever made.
      • I decided in late 2008 to make a move into cloud, for a number of reasons.  I was working on an example app.  In April 2009, one of my friends at Microsoft said "don't mention my name with this, but you need an iPhone front end for this app."  How do you get on the iPhone?  Well, there are a number of ways, including:
          • Objective-C.  It's hard to teach an old dog new tricks, and this dog knows .NET, not Objective-C.
          • An HTML/web/JavaScript optimized interface.  Yeah, this is possible.
          • PhoneGap.  Now, this is interesting: take an HTML interface and get it to run on the iPhone, Android, Blackberry, and other platforms.  I thought that this way made the most sense for me until.........
          • MonoTouch.  In May/June 2009, Novell announced a way for .NET/C# developers to write apps for the iPhone.  This is the way that made the most sense to me.
          • Titanium by Appcelerator.  This is similar in concept to PhoneGap.  I haven't played with this much but do want to learn more about it.
      • In July 2009, I emailed one of my contacts at Wrox to see if they would be interested in a short MonoTouch ebook in their Wrox Blox format.  I fully expected another response along the lines of "Wally, mobile doesn't sell."  The response I got was "Wally, iPhone is H O T, get started immediately, can you have this to me before Labor Day?"  Not quite the response I expected.  Thankfully, we didn't make the Labor Day first-draft date.  I kept pushing back because I had a feeling that things were not going to be quite as polished and feature rich as necessary.  After all, Novell doesn't have the resources of Microsoft's developer division.  The ebook shipped on November 30, 2009.
      • On about December 15, 2009, my editor emailed and said "Your ebook is selling really well, let's do a full book; have it by March 1, so get started."  Thankfully, guys like Craig Dunn and Chris Hardy were interested, along with Martin and Ror, who joined us later on.
      • I bought my wife an iPhone 3GS in early 2010 to go along with all my iPod Touch devices.
      • I tried to pretend in 2010 that I wasn't that interested in mobile and still had interest in the desktop technologies.  I love the technologies and continue to use them today, but that isn't where my interest is right now.  I'm just about all mobile all the time with my energies.
      • Our book shipped in the beginning of July 2010, right in the middle of the Apple FUD.
      • I've been looking at Mobile Web as a way around the App Stores and the Apple FUD problems of 2010.
      • With all the Apple self-FUD, we became interested in Android.
      • I went up to Dino Esposito at DevConnections in Las Vegas and introduced myself.  I've always tried to keep up with what Dino has been doing.  I was shocked; he wanted to meet me.  We must have talked for 1.5 hours.  It was way more time than I deserved.  If you get a chance, go and introduce yourself to Dino.  He's a great guy.
      • Microsoft released Windows Phone 7 in the fall of 2010.  I'm not doing development on that platform at this time.  I think they have a very interesting user interface.  The devices are being positively reviewed.  For my purposes, the devices are limited at this point in time.  We'll see what 2011 brings as far as updates to the operating system.  I need multitasking/background processing and HTML5 in the browser.  Add that as well as acceptance in the marketplace and I'll be more interested in the device.
      • Obviously, I'm now working on a MonoDroid book.
      • I own Android and iPhone/iOS devices.  I am currently working on some startup ideas and am exploring as much in that area as I can.
      • For 2011, I'm planning on speaking at the Android Developer's Conference (AnDevCon) and Mobile Connections.  I'm really excited about this.
      • I have a couple of magazine articles coming out in 2011 on Android and iPhone development with the Mono technologies.

    Is Mono "The Answer"?  What's "The Question"?  I think it will work for me.  It might work for you, it might not; it depends on your situation.  It's the current horse that I am riding.  I might find a better horse tomorrow.  So, that's how I got here.  I'm in love with mobile: mobile native apps on the device as well as mobile web.  I'm into all this cool stuff.  Where are you at?


  • Craftsmanship is ALL that Matters

    - by Wayne Molina
    Today, I'm going to talk about a touchy subject: the notion of working in a company that doesn't use the prescribed "best practices" in its software development endeavours.  Over the years I have, using a variety of pseudonyms, asked this question on popular programming forums.  Although I always add in some minor variation of the story to avoid suspicion that it's the same person posting, the crux of the tale remains the same:

    A Programmer's Tale

    A junior software developer has just started a new job at an average company, creating average line-of-business applications for internal use (the most typical scenario programmers find themselves in).  This hypothetical newbie has spent a lot of time reading up on the "theory" of software development, devouring books, blogs and screencasts from well-known and respected software developers in the community in order to broaden his knowledge and "do what the pros do".  He begins his new job, eager to apply what he's learned on a real-world project, only to discover that his new teammates don't use any of those concepts and techniques.  They hack their way through development, or in a best-case scenario use some homebrew, thrown-together semblance of a framework for their applications that follows not one of the best practices suggested by the "elite" in the software community - things like TDD (TDD as a "best practice" is the only subjective part of this post, but it's included here due to a very large following of respected developers who consider it one), the SOLID principles, well-known and venerable tools, even version control in a worst-case and truly nightmarish scenario.  Our protagonist is frustrated that he isn't doing things the "proper" way - a way he's spent personal time digesting and learning about and, more importantly, a way that some of the top developers in the industry advocate - and turns to a forum to ask the advice of his peers.  Invariably the answer I, in the guise of the concerned newbie, will receive is that A) I don't know anything and should just shut my mouth and sling code the bad way like everybody else on the team, and B) these "best practices" are a fad or a joke, and the only thing that matters is shipping software to your customers.

    I am here today to say that anyone who says this, or anything like it, is not only full of crap but indicative of exactly the type of "developer" that has helped to give our industry a bad name.  Here is why:

    One Who Knows Nothing, Understands Nothing

    On one hand, you have the cognoscenti of the .NET development world.  Guys like James Avery, Jeremy Miller, Ayende Rahien and Rob Conery; all well-respected and noted programmers that are pretty much our version of celebrities.  These guys write blogs, books, and post videos outlining the "correct" way of writing software to make sure it not only works but is maintainable and extensible and a joy to work with.  They tout the virtues of the SOLID principles, or of using TDD/BDD, or of using a mature ORM like NHibernate, Subsonic or even Entity Framework.

    On the other hand, you have Joe Everyman, Lead Software Developer at Initrode Corporation - in our hypothetical story Joe is the junior developer's new boss.  Joe's been with Initrode for 10 years, starting as the company's very first programmer and over the years building up a little fiefdom of his own until at present he's in charge of all Initrode's software development.  Joe writes code the same way he always has, without bothering to learn much, if anything.  He looked at NHibernate once and found it was "too hard", so he uses a primitive implementation of the TableDataGateway pattern as a wrapper around SqlClient.SqlConnection and SqlClient.SqlCommand instead of an actual ORM (or, in a better-case scenario, has created his own ORM); the thought of using LINQ or Entity Framework or really anything other than his own hastily homebrewed solution has never occurred to him.  He doesn't understand TDD and considers "testing" to be using the .NET debugger to step through code, or simply loading up an app and entering some values to see if it works.  He doesn't really understand SOLID, and he doesn't care to.  He's worked as a programmer for years, and that's all that counts.  Right?  WRONG.

    Who would you rather trust?  Someone with years of experience who writes books, creates well-known software and is akin to a celebrity, or someone with no credibility outside their own minute environment who throws around their clout and company seniority as the "proof" of their ability?  Joe Everyman may have years of experience at Initrode as a programmer, and says to do things "his way", but someone like Jeremy Miller or Ayende Rahien has years of experience at companies just like Initrode.  THEY know ten times more than Joe Everyman knows or could ever hope to know, and THEY say to do things "this way".

    Here's another way of thinking about it: if you wanted to get into politics and needed advice on the best way to do it, would you rather listen to the mayor of Hicktown, USA, or Barack Obama?  One is a small-time nobody while the other is very well-known and, as such, would probably have much more accurate and beneficial advice.

    NOTE: The selection of Barack Obama as an example in no way, shape, or form suggests a political affiliation or political bent to this post or blog, and no political innuendo should be mistakenly read from it; the intent was merely to compare a small-time persona with a well-known persona in a non-software field.  Feel free to replace the name "Barack Obama" with any well-known Congressman, Senator or US President of your choice.

    DIY Considered Harmful

    I will say right now that the homebrew development environment is the WORST one for an aspiring programmer, because it relies on nothing outside its own little box - no useful skill outside of the small pond.  If you are forced to use some half-baked, homebrew ORM created by your Director of Software (something like the sketch below), you are not learning anything valuable you can take with you in the future; now, if you plan to stay at Initrode for 10 years like Joe Everyman, this is fine and dandy.  However if, like most of us, you want to advance your career outside a very narrow space, you will do more harm than good by sticking it out in an environment where you, to be frank, know better than everybody else, because you are aware of alternative and, in almost all cases, better tools for the job.  A junior developer who understands why the SOLID principles are good to follow, or why TDD is beneficial, or who knows that it's better to use NHibernate/Subsonic/EF/LINQ/a well-known ORM versus some in-house one, knows better than a senior developer with 20 years' experience who doesn't understand any of that, plain and simple.  Anyone who disagrees is either a liar, or someone who, just like Joe Everyman, Lead Developer, relies on seniority and tenure rather than adapting their knowledge as things evolve.
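    For the curious, here is roughly what such a primitive TableDataGateway wrapper over SqlClient.SqlConnection and SqlClient.SqlCommand tends to look like; a hypothetical sketch of the anti-pattern described above (the class and column names are invented), not code from any real product:

        using System.Data;
        using System.Data.SqlClient;

        // One hand-rolled gateway class per table: raw SQL strings,
        // untyped DataTables, and no mapping, change tracking, or
        // query composition - everything a real ORM would provide.
        class CustomerGateway
        {
            private readonly string _connectionString;

            public CustomerGateway(string connectionString)
            {
                _connectionString = connectionString;
            }

            public DataTable FindById(int customerId)
            {
                using (var connection = new SqlConnection(_connectionString))
                using (var command = new SqlCommand(
                    "SELECT * FROM dbo.Customer WHERE CustomerId = @id", connection))
                {
                    command.Parameters.AddWithValue("@id", customerId);
                    var table = new DataTable();
                    new SqlDataAdapter(command).Fill(table); // Fill opens/closes the connection
                    return table;
                }
            }

            public void Update(int customerId, string customerName)
            {
                using (var connection = new SqlConnection(_connectionString))
                using (var command = new SqlCommand(
                    "UPDATE dbo.Customer SET CustomerName = @name WHERE CustomerId = @id",
                    connection))
                {
                    command.Parameters.AddWithValue("@name", customerName);
                    command.Parameters.AddWithValue("@id", customerId);
                    connection.Open();
                    command.ExecuteNonQuery();
                }
            }
        }

    Multiply that by every table in the database and you have the "framework" our hypothetical junior developer inherits.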
In many cases, the Joe Everymans of the world act this way out of fear - they cannot possibly fathom that a “junior” could know more than them; after all, they’ve spent 10 or more years in the same company, doing the same job, cranking out the same shoddy software.  And here comes a newbie who hasn’t spent 10+ years doing the same things, with a fresh and often radical take on the craft, and Joe Everyman is afraid he might have to put some real effort into his career again instead of just pointing to his 10 years of service at Initrode as “proof” that he’s good, or that he might have to learn something new to improve; in most cases the problem is Joe Everyman, and by extension Initrode itself, has a mentality of just being “good enough”, and mediocrity is the rule of the day. A Thorn Bush is No Place for a Phoenix My advice is that if you work on a team where they don't use the best practices that some of the most famous developers in our field say is the "right" way to do things (and have legions of people who agree), and YOU are aware of these practices and can see why they work, then LEAVE the company.  Find a company where they DO care about quality, and craftsmanship, otherwise you will never be happy.  There is no point in "dumbing" yourself down to the level of your co-workers and slinging code without care to craftsmanship.  In 95% of these situations there will be no point in bringing it to the attention of Joe Everyman because he won't listen; he might even get upset that someone is trying to "upstage" him and fire the newbie, and replace someone with loads of untapped potential with a drone that will just nod affirmatively and grind out the tasks assigned without question. Find a company that has people smart enough to listen to the "best and brightest", and be happy.  Do not, I repeat, DO NOT waste away in a job working for ignorant people.  At the end of the day software development IS a craft, and a level of craftsmanship is REQUIRED for any serious professional.  When you have knowledgeable people with the credibility to back it up saying one thing, and small-time people who are, to put it bluntly, nobodies in the field saying and doing something totally different because they can't comprehend it, leave the nobodies to their own devices to fade into obscurity.  Work for a company that uses REAL software engineering techniques and really cares about craftsmanship.  The biggest issue affecting our career, and the reason software development has never been the respected, white-collar career it was meant to be, is because hacks and charlatans can pass themselves off as professional programmers without following a lick of good advice from programmers much better at the craft than they are.  These modern day snake-oil salesmen entrench themselves in companies by hoodwinking non-technical businesspeople and customers with their shoddy wares, end up in senior/lead/executive positions, and push their lack of knowledge on everybody unfortunate enough to work with/for/under them, crushing any dissent or voices of reason and change under their tyrannical heel and leaving behind a trail of dismayed and, often, unemployed junior developers who were made examples of to keep up the facade and avoid the shadow of doubt being cast upon them. To sum this up another way: If you surround yourself with learned people, you will learn.  Surround yourself with ignorant people who can't, as the saying goes, see the forest through the trees, and you'll learn nothing of any real value.  
There is more to software development than just writing code, and the end goal should not be just "shipping software"; it should be shipping software that is extensible, maintainable, and above all else software whose creation has broadened your knowledge in some capacity, even if a minor one.  An eager newbie who knows theory and thirsts for knowledge can easily be moulded and taught the advanced topics, but the same can't be said of someone who only cares about the finish line.  This industry needs more people espousing the benefits of software craftsmanship and proper software engineering techniques, and fewer Joe Everymans who are unwilling to adapt or foster new ways of thinking.

Conclusion - I Cast "Protection from Fire"

I am fairly certain this post will spark some controversy and might even invite the flames.  Please keep in mind these are opinions and nothing more.  A little healthy rant and subsequent flamewar can be good for the soul once in a while.  To paraphrase The Godfather: It helps to get rid of the bad blood.

    Read the article

  • Optimizing Solaris 11 SHA-1 on Intel Processors

    - by danx
    SHA-1 is a "hash" or "digest" operation that produces a 160-bit (20-byte) checksum value on arbitrary data, such as a file. It is intended to uniquely identify text and to verify it hasn't been modified. Max Locktyukhin and others at Intel have improved the performance of the SHA-1 digest algorithm using multiple techniques. This code has been incorporated into Solaris 11 and is available in the Solaris Crypto Framework via the libmd(3LIB) library, the industry-standard libpkcs11(3LIB) library, and the Solaris kernel module sha1. The optimized code is used automatically on systems with an x86 CPU supporting SSSE3 (Intel Supplemental SSE3). Intel microprocessor architectures that support SSSE3 include the Nehalem, Westmere, and Sandy Bridge microprocessor families. Further optimizations are available for microprocessors that support AVX (such as Sandy Bridge).

    Although SHA-1 is considered obsolete because of weaknesses found in the algorithm (NIST recommends using at least SHA-256), SHA-1 is still widely used and will be with us for a while yet. Collisions (the same SHA-1 result for two different inputs) can be found with moderate effort. SHA-1 is still used heavily in SSL/TLS, for example. And SHA-1 is stronger than the older MD5 digest algorithm, another digest option defined in SSL/TLS.

    Optimizations Review

    SHA-1 operates by reading an arbitrary amount of data. The data is read in 512-bit (64-byte) blocks (the last block is padded in a specific way to ensure it's a full 64 bytes). Each 64-byte block has 80 "rounds" of calculations (consisting of a mixture of "ROTATE-LEFT", "AND", and "XOR") applied to the block. Each round produces a 32-bit intermediate result, called W[i]. Here's how each round operates:

    1. The first 16 rounds, rounds 0 to 15, read the 512-bit block 32 bits at a time; these 32 bits are used as input to the round.
    2. The remaining rounds, rounds 16 to 79, use the results from the previous rounds as input. Specifically, round i XORs the results of rounds i-3, i-8, i-14, and i-16 and rotates the result left 1 bit.

    The remaining calculations for the round are a series of AND, XOR, and ROTATE-LEFT operators on the 32-bit input and some constants. The 32-bit result is saved as W[i] for round i, and the 160-bit state carried across all 80 rounds becomes the SHA-1 checksum.

    Optimization: Vectorization

    The first 16 rounds can be vectorized (computed in parallel) because they don't depend on the output of a previous round. As for the remaining rounds, because of step 2 above, computing round i depends on the result of round i-3, W[i-3], so one can vectorize 3 rounds at a time. Max Locktyukhin found through simple factoring, explained in detail in his article referenced below, that the dependencies of round i on the results of rounds i-3, i-8, i-14, and i-16 can be replaced instead with dependencies on the results of rounds i-6, i-16, i-28, and i-32. That is, instead of initializing intermediate result W[i] with:

        W[i] = (W[i-3] XOR W[i-8] XOR W[i-14] XOR W[i-16]) ROTATE-LEFT 1

    initialize W[i] as follows:

        W[i] = (W[i-6] XOR W[i-16] XOR W[i-28] XOR W[i-32]) ROTATE-LEFT 2

    That means that 6 rounds can be vectorized at once, with no additional calculations, instead of just 3! This optimization is independent of Intel or any other microprocessor architecture (although the microprocessor has to support vectorization to use it) and exploits one of the weaknesses of SHA-1.

    Optimization: SSSE3

    Intel SSSE3 makes use of 16 %xmm registers, each 128 bits wide.
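    Before diving into the assembly below, here is a small scalar sketch in C# of the two W[i] recurrences just described. This is a hypothetical illustration of the factoring, not the Solaris or Intel source; for i >= 32 both forms produce the same schedule values:

        static class Sha1Schedule
        {
            static uint RotateLeft(uint x, int n) => (x << n) | (x >> (32 - n));

            // Standard recurrence: the nearest dependency is W[i-3],
            // so at most 3 rounds can be computed in parallel.
            internal static uint NextWStandard(uint[] w, int i) =>
                RotateLeft(w[i - 3] ^ w[i - 8] ^ w[i - 14] ^ w[i - 16], 1);

            // Factored recurrence: the nearest dependency is W[i-6],
            // so up to 6 rounds can proceed in parallel. It requires
            // keeping a 32-entry window of previous W values.
            internal static uint NextWFactored(uint[] w, int i) =>
                RotateLeft(w[i - 6] ^ w[i - 16] ^ w[i - 28] ^ w[i - 32], 2);
        }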
    The 4 32-bit inputs to a round, W[i-6], W[i-16], W[i-28], W[i-32], all fit in one %xmm register. The following code snippet, from Max Locktyukhin's article, converted to ATT assembly syntax, computes 4 rounds in parallel with just a dozen or so SSSE3 instructions:

        movdqa  W_minus_04, W_TMP
        pxor    W_minus_28, W         // W equals W[i-32:i-29] before XOR
                                      // W = W[i-32:i-29] ^ W[i-28:i-25]
        palignr $8, W_minus_08, W_TMP // W_TMP = W[i-6:i-3], combined from
                                      // W[i-4:i-1] and W[i-8:i-5] vectors
        pxor    W_minus_16, W         // W = (W[i-32:i-29] ^ W[i-28:i-25]) ^ W[i-16:i-13]
        pxor    W_TMP, W              // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]
        movdqa  W, W_TMP              // 4 dwords in W are rotated left by 2
        psrld   $30, W                // rotate left by 2: W = (W >> 30) | (W << 2)
        pslld   $2, W_TMP
        por     W, W_TMP
        movdqa  W_TMP, W              // four new W values W[i:i+3] are now calculated
        paddd   (K_XMM), W_TMP        // adding 4 current rounds' values of K
        movdqa  W_TMP, (WK(i))        // storing for downstream GPR instructions to read

    A window of the 32 previous results, W[i-1] to W[i-32], is saved in memory on the stack. This is best illustrated with a chart. Without vectorization, computing the rounds is serial (each "R" represents 1 round of SHA-1 computation):

        RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR

    With vectorization, 4 rounds can be computed in parallel:

        RRRRRRRRRRRRRRRRRRRR
        RRRRRRRRRRRRRRRRRRRR
        RRRRRRRRRRRRRRRRRRRR
        RRRRRRRRRRRRRRRRRRRR

    Optimization: AVX

    The new "Sandy Bridge" microprocessor architecture, which supports AVX, allows another interesting optimization. SSSE3 instructions have two operands, an input and an output. AVX allows three operands: two inputs and an output. In many cases two SSSE3 instructions can be combined into one AVX instruction. The difference is best illustrated with an example. Consider these two instructions from the snippet above:

        pxor    W_minus_16, W         // W = (W[i-32:i-29] ^ W[i-28:i-25]) ^ W[i-16:i-13]
        pxor    W_TMP, W              // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]

    With AVX they can be combined into one instruction:

        vpxor   W_minus_16, W, W_TMP  // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]

    This optimization is also in Solaris, although Sandy Bridge-based systems aren't widely available yet. As an exercise for the reader: AVX also has 256-bit media registers, %ymm0 - %ymm15 (a superset of the 128-bit %xmm0 - %xmm15). Can %ymm registers be used to parallelize the code even more?

    Optimization: Solaris-specific

    In addition to using the Intel code described above, I performed other minor optimizations to the Solaris SHA-1 code:

    Increased the digest(1) and mac(1) commands' buffer size from 4K to 64K, as previously done for decrypt(1) and encrypt(1). This size is well suited for ZFS file systems, but helps for other file systems as well.
    Optimized encode functions, which byte-swap the input and output data, to copy/byte-swap 4 or 8 bytes at a time instead of 1 byte at a time.
    Enhanced the Solaris mdb(1) and kmdb(1) debuggers to display all 16 %xmm and %ymm registers (mdb "$x" command). Previously they only displayed the first 8 that are available in 32-bit mode. Can't optimize if you can't debug :-).
    Changed the SHA-1 code to allow processing in "chunks" greater than 2 Gigabytes (64-bit lengths).

    Performance

    I measured performance on a Sun Ultra 27 (which has a Nehalem-class Xeon 5500 Intel W3570 microprocessor @3.2GHz). Turbo mode is disabled for consistent performance measurement.
    Graphs are better than words and numbers, so here they are: The first graph shows the Solaris digest(1) command before and after the optimizations discussed here, contained in libmd(3LIB). I ran the digest command on a half-GByte file in swapfs (/tmp) and execution time decreased from 1.35 seconds to 0.98 seconds.

    The second graph shows the results of an internal microbenchmark that uses the Solaris libpkcs11(3LIB) library. The operations are on a 128-byte buffer with 10,000 iterations. The results show operations increased from 320,000 to 416,000 operations per second.

    Finally, the third graph shows the results of an internal kernel microbenchmark that uses the Solaris /kernel/crypto/amd64/sha1 module. The operations are on a 64-Kbyte buffer with 100 iterations. The results show that for 1 kernel thread, operations increased from 410 to 600 MBytes/second. For 8 kernel threads, operations increased from 1540 to 1940 MBytes/second.

    Availability

    This code is in Solaris 11 FCS. It is available in the 64-bit libmd(3LIB) library for 64-bit programs and is in the Solaris kernel. You must be running hardware that supports Intel's SSSE3 instructions (for example, Intel Nehalem, Westmere, or Sandy Bridge microprocessor architectures). The easiest way to determine if SSSE3 is available is with the isainfo(1) command. For example:

        nehalem $ isainfo -v
        64-bit amd64 applications
                sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu
        32-bit i386 applications
                sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu

    If the output also shows "avx", Solaris executes the even-more-optimized 3-operand AVX instructions for SHA-1 mentioned above:

        sandybridge $ isainfo -v
        64-bit amd64 applications
                avx xsave pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu
        32-bit i386 applications
                avx xsave pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu

    No special configuration or setup is needed to take advantage of this code. The Solaris libraries and kernel automatically determine if they're running on SSSE3- or AVX-capable machines and execute the correctly-tuned code for that microprocessor.

    Summary

    The Solaris 11 Crypto Framework, via the sha1 kernel module and the libmd(3LIB) and libpkcs11(3LIB) libraries, incorporates a useful SHA-1 optimization from Intel for SSSE3-capable microprocessors. As with other Solaris optimizations, it comes automatically "under the hood" with the current Solaris release.

    References

    "Improving the Performance of the Secure Hash Algorithm (SHA-1)" by Max Locktyukhin (Intel, March 2010) - the source for the SHA-1 optimizations used in Solaris
    "SHA-1", Wikipedia - good overview of SHA-1
    FIPS 180-1, SHA-1 standard (FIPS, 1995)
    NIST Comments on Cryptanalytic Attacks on SHA-1 (2005, revised 2006)

    Read the article

  • Using Lightbox with _Screen

    Although I have to admit that I discovered Bernard Bout's ideas and concepts about implementing a lightbox in Visual FoxPro quite a while ago, there was no "spare" time in active projects that allowed me to have a closer look into his solution(s). Luckily, these days I received a demand to focus a little bit more on this. This article describes how to integrate and make use of Bernard's lightbox class in combination with _Screen in Visual FoxPro. The requirement in this project was to be able to visually lock the whole application (_Screen area) and guide the user to a notification that should not be ignored easily. Depending on the importance, any current user activity should be interrupted and focus put onto the notification.

    Getting the "meat", eh, source code

    Please check out Bernard's blog on Foxite directly in order to get the latest and greatest version. As of the time of writing this article I use version 6.0 as described in this blog entry: The Fastest Lightbox Ever. The Lightbox class is sub-classed from the imgCanvas class from the GdiPlusX project on VFPx, and therefore you need to have the source code of GdiPlusX as well and integrate it into your development environment. The version I use is available here: Release GDIPlusX 1.20. When you open the bbGdiLightbox class for the first time, VFP might ask you to update the reference to gdiplusx.vcx. As we have the sources, no problem, and you have access to Bernard's code. The class itself is pretty easy to understand: some properties that you do not need to change, and three methods: Setup(), ShowLightbox() and BeforeDraw().

    The challenge - _Screen or not?

    Reading Bernard's article about the fastest lightbox ever, he states the following: "The class will only work on a form. It will not support any other containers". Really? And what about _Screen? Isn't that a form class, too? Yes, of course it is, but nonetheless trying to use _Screen directly will fail. Well, let's have a look at the code to see why:

        WITH This
            .Left = 0
            .Top = 0
            .Height = ThisForm.Height
            .Width = ThisForm.Width
            .ZOrder(0)
            .Visible = .F.
        ENDWITH

    During the setup of the lightbox, as well as while capturing the image as replacement for your forms and controls, the object reference Thisform is used. Which is a little bit restrictive in my opinion, but let's continue. The second issue lies in the method ShowLightbox() and is introduced by the call of .Bitmap.FromScreen():

        Lparameters tlVisiblilty
        * tlVisiblilty - show or hide (T/F)
        * grab a screen dump with controls
        IF tlVisiblilty
            Local loCaptureBmp As xfcBitmap
            Local lnTitleHeight, lnLeftBorder, lnTopBorder, lcImage, loImage
            lnTitleHeight = IIF(ThisForm.TitleBar = 1,Sysmetric(9),0)
            lnLeftBorder = IIF(ThisForm.BorderStyle < 2,0,Sysmetric(3))
            lnTopBorder = IIF(ThisForm.BorderStyle < 2,0,Sysmetric(4))
            With _Screen.System.Drawing
                loCaptureBmp = .Bitmap.FromScreen(ThisForm.HWnd,;
                    lnLeftBorder,;
                    lnTopBorder+lnTitleHeight,;
                    ThisForm.Width ,;
                    ThisForm.Height)
            ENDWITH
            * save it to a property
            This.capturebmp = loCaptureBmp
            ThisForm.SetAll("Visible",.F.)
            This.DraW()
            This.Visible = .T.
        ELSE
            ThisForm.SetAll("Visible",.T.)
            This.Visible = .F.
        ENDIF

    My first trials in using the class ended in an exception - GdiPlusError:OutOfMemory - thrown by the Bitmap object. Frankly speaking, this happened mainly because of my lack of knowledge about GdiPlusX. After reading some documentation, especially about the FromScreen() method, I experimented a little bit.
    Capturing the visible area of _Screen actually was not the real problem; it was the dimensions I specified for the bitmap.

    The modifications - step by step

    First of all, get rid of the restrictive object references to Thisform and change them into either This.Parent or, more generically, into This.oForm (even better: This.oControl). The Lightbox.Setup() method now sets the necessary object reference like so:

        *====================================================================
        * Initial setup
        * Default value: This.oControl = "This.Parent"
        * Alternative:   This.oControl = "_Screen"
        *====================================================================
        With This
            .oControl = Evaluate(.oControl)
            If Vartype(.oControl) == T_OBJECT
                .Anchor = 0
                .Left = 0
                .Top = 0
                .Width = .oControl.Width
                .Height = .oControl.Height
                .Anchor = 15
                .ZOrder(0)
                .Visible = .F.
            EndIf
        Endwith

    Also, based on other developers' comments on Bernard's articles about the lightbox concept and its evolution, I found the source code to handle the differences between a form and _Screen, which goes into Lightbox.ShowLightbox() like this:

        *====================================================================
        * tlVisibility - show or hide (T/F)
        * grab a screen dump with controls
        *====================================================================
        Lparameters tlVisibility
        Local loControl
        m.loControl = This.oControl
        If m.tlVisibility
            Local loCaptureBmp As xfcBitmap
            Local lnTitleHeight, lnLeftBorder, lnTopBorder, lcImage, loImage
            lnTitleHeight = Iif(m.loControl.TitleBar = 1,Sysmetric(9),0)
            lnLeftBorder = Iif(m.loControl.BorderStyle < 2,0,Sysmetric(3))
            lnTopBorder = Iif(m.loControl.BorderStyle < 2,0,Sysmetric(4))
            With _Screen.System.Drawing
                If Upper(m.loControl.Name) == Upper("Screen")
                    loCaptureBmp = .Bitmap.FromScreen(m.loControl.HWnd)
                Else
                    loCaptureBmp = .Bitmap.FromScreen(m.loControl.HWnd,;
                        lnLeftBorder,;
                        lnTopBorder+lnTitleHeight,;
                        m.loControl.Width ,;
                        m.loControl.Height)
                EndIf
            Endwith
            * save it to a property
            This.CaptureBmp = loCaptureBmp
            m.loControl.SetAll("Visible",.F.)
            This.Draw()
            This.Visible = .T.
        Else
            This.CaptureBmp = .Null.
            m.loControl.SetAll("Visible",.T.)
            This.Visible = .F.
        Endif

    Are we done? Almost... Although Bernard says it clearly in his article ("Just drop the class on a form and call it as shown."), it did not come clear to my mind in the first place with _Screen; but, yeah, he is right. Dropping the class on a form provides a permanent link between those two classes; it creates a valid This.Parent object reference. Bearing in mind that the lightbox class cannot be "dropped" on _Screen, we have to create the same type of binding during runtime execution like so:

        *====================================================================
        * Create global lightbox component
        *====================================================================
        Local llOk, loException As Exception
        m.llOk = .F.
        m.loException = .Null.
        If Not Vartype(_Screen.Lightbox) == "O"
            Try
                _Screen.AddObject("Lightbox", "bbGdiLightbox")
            Catch To m.loException
                Assert .F. Message m.loException.Message
            EndTry
        EndIf
        m.llOk = (Vartype(_Screen.Lightbox) == "O")
        Return m.llOk

    Through runtime instantiation we create a valid binding to This.Parent in the lightbox object, and the code works as expected with _Screen.

    Ease your life: Use properties instead of constants

    Having a closer look at the BeforeDraw() method might whet your appetite to simplify the code a little bit. Looking at the sample screenshots in Bernard's article you see several forms in different colors.
    This got me to modify the code like so:

        *====================================================================
        * Apply the actual lightbox effect on the captured bitmap.
        *====================================================================
        If Vartype(This.CaptureBmp) == T_OBJECT
            Local loGfx As xfcGraphics
            loGfx = This.oGfx
            With _Screen.System.Drawing
                loGfx.DrawImage(This.CaptureBmp,This.Rectangle,This.Rectangle,.GraphicsUnit.Pixel)
                * change the colours as needed here
                * possible colours are (220,128,0,0),(220,0,0,128) etc.
                loBrush = .SolidBrush.New(.Color.FromArgb( ;
                    This.Opacity, .Color.FromRGB(This.BorderColor)))
                loGfx.FillRectangle(loBrush,This.Rectangle)
            Endwith
        Endif

    Create an additional property Opacity to specify the degree of translucency you would like to have, without the need to change the code in each instance of the class. This way you only need to change the values of Opacity and BorderColor to tweak the appearance of your lightbox. This could be quite helpful to signal different levels of importance (i.e. green, yellow, orange, red, etc...) of notifications to the users of the application.

    Final thoughts

    Using the lightbox concept in combination with _Screen instead of forms is possible. Jim Wiggins already comments in Bernard's article that one could loop through the _Screen.Forms collection in order to cascade the lightbox visibility to all active forms. Good idea. But honestly, I believe that instead of looping through all forms one could use _Screen.SetAll("ShowLightbox", .T./.F., "Form") with a Form.ShowLightbox_Access method to gain more speed. The modifications described above might provide even more features to your applications while consuming fewer resources and less performance. Additionally, the restriction to capture only forms does not exist anymore. Using _Screen you are able to capture and cover anything. The captured area of _Screen does not include any toolbars, docked windows, or menus. Therefore, it is advised to take this concept to a higher level and to combine it with additional classes that handle the state of toolbars, docked windows and menus. Which I did for the customer's project.

    Read the article

  • Documenting C# Library using GhostDoc and SandCastle

    - by sreejukg
    Documentation is an essential part of any IT project, especially when you are creating reusable components that will be used by other developers (such as class libraries). Without documentation, re-using a class library is almost impossible. Just think of coding .NET applications without MSDN documentation (Ooops, I can't think of it). Normally developers, who know the bits and pieces of their classes, see it as boring work to write the details again to generate the documentation. Also, the amount of work needed to create documentation by hand and keep it in step with code changes makes manual creation of documentation tedious, if not impossible. So what is the effective solution? Let me divide this into two steps:

    1. Generate comments for your code while you are writing the code.
    2. Create a documentation file using these comments.

    Now I am going to examine these processes.

    Step 1: Generate XML comments automatically

    Most developers write comments for their code. The best thing is that the comments can be entered during the development process. Additionally, comments give a good reference to the code and make your code more manageable/readable. Later these comments can be converted into documentation, along with your source code, by identifying properties and methods.

    I found an add-in for Visual Studio, GhostDoc, that automatically generates XML documentation comments for C#. The add-in is available in the Visual Studio Gallery at MSDN. You can download it from the url http://visualstudiogallery.msdn.microsoft.com/en-us/46A20578-F0D5-4B1E-B55D-F001A6345748. I downloaded the free version from the above url; the free version suits my requirement. There is a professional version (you need to pay some $ for this) available that gives you some more features, but I found the free version itself suits my requirements. The installation process is straightforward: a couple of clicks will do the work for you. The best thing with GhostDoc is that it supports multiple versions of Visual Studio, such as 2005, 2008 and 2010.

    After installing GhostDoc, when you start Visual Studio, the GhostDoc configuration dialog will appear. The first screen asks you to assign a hot key; pressing this hot key will enter the comment into your code file with the necessary structure required by GhostDoc. Click Assign to go to the next step, where you configure the rules for generating the documentation from the code file. Click Create to start creating the rules, and click the Finish button to close this wizard. Now you have performed the necessary configuration required by GhostDoc. In the Visual Studio Tools menu you can find GhostDoc, which gives you some options.

    Now let us examine how GhostDoc generates comments for a method. I have written the below code in my code-behind file:

        public Char GetChar(string str, int pos)
        {
            return str[pos];
        }

    Now I need to generate the comments for this function. Select the function and enter the hot key assigned during the configuration. GhostDoc will generate the comments as follows:

        /// <summary>
        /// Gets the char.
        /// </summary>
        /// <param name="str">The STR.</param>
        /// <param name="pos">The pos.</param>
        /// <returns></returns>
        public Char GetChar(string str, int pos)
        {
            return str[pos];
        }

    So this is a very handy tool that helps developers write comments easily. You can generate the XML documentation file separately while compiling the project. This will be done by the C# compiler. You can enable the XML documentation creation option (checkbox) under Project properties -> Build tab.
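    The generated stubs are only a starting point. As a sketch of where you might take them by hand (my own wording, not GhostDoc output), a fleshed-out comment for the same method could look like this:

        /// <summary>
        /// Returns the character at the given position in a string.
        /// </summary>
        /// <param name="str">The string to read from.</param>
        /// <param name="pos">The zero-based position of the character.</param>
        /// <returns>The character found at <paramref name="pos"/>.</returns>
        /// <exception cref="IndexOutOfRangeException">
        /// Thrown when <paramref name="pos"/> is negative or beyond the end of <paramref name="str"/>.
        /// </exception>
        public Char GetChar(string str, int pos)
        {
            return str[pos];
        }

    The richer the comments, the more useful the generated help file becomes in Step 2.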
    Now when you compile, the XML file will be created under the bin folder.

    Step 2: Generate the documentation from the XML file

    Now you have generated the XML documentation file. Sandcastle is the tool from Microsoft that generates MSDN-style documentation from the compiler-produced XML file. The project is available in CodePlex: http://sandcastle.codeplex.com/. Download and install Sandcastle to your computer. Sandcastle is a command-line tool that doesn't have a rich GUI. If you want to automate the documentation generation, you will definitely be using the command-line tools. Since I wanted to generate the documentation from the XML file generated in the previous step, I was expecting a GUI where I could see the options. There is a GUI available for Sandcastle called Sandcastle Help File Builder; see the link to the project in CodePlex: http://www.codeplex.com/wikipage?ProjectName=SHFB. You need to install Sandcastle and then the Sandcastle Help File Builder. From here I assume that you have installed both Sandcastle and Sandcastle Help File Builder successfully.

    Once you have installed the Help File Builder, it will be available in your All Programs list. Clicking on the Sandcastle Help File Builder GUI will launch the application. First you need to create a project: click on File -> New Project and the New Project dialog will appear. Choose a folder to store your project file, give a name for your documentation project, and click the Save button. Now you will see your project properties.

    Now, from the Project Explorer, right-click on Documentation Sources and click the Add Documentation Source link. A documentation source is a file, such as an assembly or a Visual Studio solution or project, from which information will be extracted to produce API documentation. In the Add Documentation Source dialog, I selected the XML file generated by my project. Once you add the XML file to the project, you will see the dll file automatically added by the Help File Builder.

    Now click on the Build button and the application will generate the help file. The Build window shows the result of each step, and once the process completes successfully you will see the final output reported there. Now navigate to your help project (I selected the folder My Documents\Documentation); inside the Help folder you can find the chm file. Opening the chm file will give you MSDN-like documentation.

    Documentation is an important part of the development life cycle. Sandcastle with GhostDoc makes this process easier, so that developers can implement documentation in their projects with simple-to-use steps.

    Read the article

  • Handling HumanTask attachments in Oracle BPM 11g PS4FP+ (I)

    - by ccasares
    Adding attachments to a HumanTask is a feature that has existed in Oracle HWF (Human Workflow) since 10g. However, in 11g there have been many improvements to this feature, and this entry will try to summarize them. Oracle BPM 11g 11.1.1.5.1 (aka PS4 Feature Pack or PS4FP) introduced two great features:

    Ability to link attachments at a Task scope or at a Process scope: "Task" attachments are only visible within the scope (lifetime) of a task. This means that, initially, any member of the assignment pattern of the Human Task will be able to handle (add, review or remove) attachments. However, once the task is completed, subsequent human tasks will not have access to them. This does not mean those attachments get lost. Once the human task is completed, attachments can be retrieved in order to, e.g., check them in to a Content Server or to inject them into a new and different human task. Aside note: a "re-initiated" human task will inherit comments and attachments, along with history and -optionally- payload. See here for more info. "Process" attachments are visible within the scope of the process. This means that subsequent human tasks in the same process instance will have access to them.

    Ability to use Oracle WebCenter Content (previously known as "Oracle UCM") as the backend for the attachments instead of using the HWF database backend. This feature adds all content server document lifecycle capabilities to HWF attachments (versioning, RBAC, metadata management, etc.). As of today, only Oracle WCC is supported. However, Oracle BPM Suite does include a license of Oracle WCC for the sole usage of document management within the BPM scope.

    Here are some code samples that leverage the above features.

    Retrieving uploaded attachments -Non UCM-

    Non-UCM attachments (the default ones, which have existed since 10g and are stored "as-is" in the HWF database backend) can be retrieved after the completion of the Human Task. First, we need to know whether any attachment has effectively been uploaded to the human task. There are two ways to find this out: through an XPath function, or by checking the execData/attachment[] structure. For example:

    Once we are sure one or more attachments were uploaded to the Human Task, we want to get them. In this example, by "get" I mean to get the attachment name and the payload of the file. Aside note: Oracle HWF lets you upload two kinds of [non-UCM] attachments: a desktop document and a Web URL. This example focuses just on the desktop document one. In order to "retrieve" an uploaded Web URL, you can get it directly from the execData/attachment[] structure.

    Attachment content (payload) is retrieved through the getTaskAttachmentContents() XPath function. This example shows how to retrieve as many attachments as have been uploaded to the Human Task and write them to the server using the File Adapter service. The sample process excerpt is as follows: a dummy UserTask using the "HumanTask1" Human Task, followed by an Embedded Subprocess that will retrieve the attachments (we're assuming at least one attachment is uploaded); once retrieved, we will write each of them back to a file on the server using a File Adapter service.

    In detail: We've defined an XSD structure that will hold the attachments (both name and payload). Then we can create a BusinessObject based on such an element (attachmentCollection) and create a variable (named attachmentBPM) of that BusinessObject type. We will also need to keep a copy of the HumanTask output's execData structure.
    Therefore we need to create a variable of type TaskExecutionData... ...and copy the HumanTask output execData to it. Now we get into the embedded subprocess that will retrieve the attachments' payload. First, using an XSLT transformation, we feed the attachmentBPM variable with the name of each attachment, setting an empty value to the payload. Please note that we're using the XSLT for-each node to create as many target structures as necessary. Also note that we're setting an empty text to the payload variable. The reason for this is to make sure the <payload></payload> tag gets created. This is needed when we map the payload to the XML variable later.

    Aside note: We are assuming that we're retrieving non-UCM attachments. However, in real life you might want to check the type of attachment you're handling. The execData/attachment[]/storageType element contains the value "UCM" for UCM-type attachments, "TASK" for non-UCM ones, or "URL" for Web URL ones. Those values are part of the "Ext.Com.Oracle.Xmlns.Bpel.Workflow.Task.StorageTypeEnum" enumeration.

    Once we have fed the attachmentsBPM structure so that it now contains the name of each of the attachments, it is time to iterate through it and get the payload. Therefore we will use a new embedded subprocess of type MultiInstance, which will iterate over the attachmentsBPM/attachment[] element. In every iteration we will use a Script activity to map the corresponding payload element with the result of the XPath function getTaskAttachmentContents(). Please note how the target array element is indexed with the loopCounter predefined variable, so that we make sure we're feeding the right element during the array iteration. The XPath function used looks as follows:

        hwf:getTaskAttachmentContents(bpmn:getDataObject('UserTask1LocalExecData')/ns1:systemAttributes/ns1:taskId, bpmn:getDataObject('attachmentsBPM')/ns:attachment[bpmn:getActivityInstanceAttribute('SUBPROCESS3067107484296', 'loopCounter')]/ns:fileName)

    where the input parameters are:

    - taskId of the just-completed Human Task
    - attachment name we're retrieving the payload from
    - array index (loopCounter predefined variable)

    Aside note: The reason why we're iterating over the execData/attachment[] structure through an embedded subprocess and not, e.g., using XSLT and for-each nodes, is mostly that the getTaskAttachmentContents() XPath function is currently not available in XSLT mappings. So this example might be considered a workaround until this gets fixed/enhanced in future releases.

    Once this embedded subprocess ends, we will have all attachments (name + payload) in the attachmentsBPM variable, which is the main goal of this sample. But in order to test that everything runs fine, we finish the sample by writing each attachment to a file. To that end we include a final embedded subprocess to concurrently iterate through each attachmentsBPM/attachment[] element. On each iteration we will use a Service activity that invokes a File Adapter write service. Here we have two important parameters to set. First, the payload itself: the file adapter expects binary data in base64 format (string), and we have to map it using XPath (simple mapping doesn't recognize a String as a valid base64-binary target). Second, we must set the target filename using the Service Properties dialog box. Again, note how we're making use of the loopCounter index variable to get the right element within the embedded subprocess iteration.

    Handling UCM attachments will be part of a different and upcoming blog entry.
    Once I finish with all posts on this matter, I will upload the whole sample project to java.net.

    Read the article

  • User Guide to Dropbox Shared Folders

    - by Matthew Guay
    Dropbox is an incredibly useful tool for keeping all your files synced between your computers and the cloud.  Here we're going to look at how you can keep your whole team on the same page with Dropbox shared folders.

    Creating a Shared Folder

    Setting up a shared folder in Dropbox is easy.  Add the files you want to share to a folder in Dropbox on your computer, then right-click in the folder, select Dropbox, and then choose Share This Folder.  Alternatively, log into your Dropbox account online, click the drop-down menu beside the folder you want to share, and click Share this folder. Now, enter the email addresses of the people you want to share the folder with, and optionally enter a message explaining why you're sharing the folder. The people you invite will receive an email inviting them to view and join the shared folder.  If they haven't signed up for Dropbox, they can sign up directly; otherwise, they can simply log into their Dropbox account and start adding or editing files. Shared folders have a slightly different icon in your Dropbox.  Notice the shared folder on the left has an icon with two people, while the folder on the right, which is not shared, shows previews of its contents.

    See Your Shared Folder's History

    Whenever your collaborators in your shared folders add or change files, you will see a tooltip notification telling you what changed.  You can also view the changes online.  Log into your Dropbox account in your browser and select the Events tab.  This shows all changes to your Dropbox, but you can view only the changes in your shared folder by selecting its name on the left sidebar. Now you can see all recent changes to your folder, and can also see who added or removed each file.  On the bottom of the page, you can even add a comment that all the collaborators will see. If someone deleted a file you still need, you can restore it by clicking its link in this online history.  Or, you can view any deleted files by right-clicking in your Dropbox folder in Explorer.  Select Dropbox, and then click Show Deleted Files.

    Get Notified When a Change is Made

    You're not always in front of your computer; you've got a life beyond your projects, after all (at least hopefully).  If you really want to stay connected to what's happening with your project, though, you can easily do that no matter where you are. Your shared Dropbox folder's history page offers an RSS feed of all changes to the folder.  Click the Subscribe to this feed hyperlink, then, in the popup that opens, click "Copy to clipboard" so you can use this RSS feed. You can subscribe to RSS feeds through many web browsers, email clients, dedicated feed readers, and more.  In Firefox, Internet Explorer 7/8, or Opera, you can paste the feed address into your address bar and subscribe to the feed directly in your browser.

    However, subscribing to the feed in a desktop application won't help you much when you're away from your computer.  One great option is to subscribe in the popular Google Reader; then you can check your feed from any browser, on any computer or mobile device. To add your Dropbox feed to Google Reader, log into Google Reader (link below), click Add a subscription on the top left, paste your RSS feed from Dropbox, and click Add.  Now you can see any changes to files or folders in Google Reader. You can even add your feed to your iGoogle homepage.  Click the Add it Now button on the right on the front page of Google Reader to add your feeds to iGoogle.
    Now you can see updates on your files from your homepage.  If you're using a different computer, just log into your Google account to see what's happening. You can also access your Google Reader feeds from many programs and apps for most major smartphones, including iPhone, Windows Phone, and Blackberry.

    Receive a Tweet or Text When Changes are Made

    If you're a hyper-connected individual, chances are you send and receive tweets on the go.  If so, this might be the best way for you to get notified when changes are made to your Dropbox shared folder.  To do this, first create a new Twitter account to publish your changes through.  If you don't want the whole world to see your updates, click Settings and set your new Twitter account to Private. Once the new account is created, follow it with your normal Twitter account so you'll see updates.

    Now, let's publish our Dropbox RSS feed to Twitter.  Create an account with Twitterfeed (link below). Once your account is set up, add your feed to it.  Name your feed, and enter your feed address from Dropbox.  Click Advanced Settings to make your feed work just like you want. In Advanced Settings, change the frequency to "Every 30 mins" to make sure you're updated on changes as quickly as possible.  You can also change other settings if you like. Click "Continue to Step 2", and then click Twitter under the available services to add your account. Make sure you're signed into your new Twitter account, and then click Authenticate Twitter.  Allow the application.  Now, finally, click Create Service. Whenever a change is made, you will receive a tweet via your new Twitter account.  And since you can receive tweets via text message or many mobile applications, you'll never be very far away from your Dropbox changes!

    Conclusion

    Dropbox shared folders are a great way to keep your whole team working together on the same files in a project.  And with these handy tricks, you can keep up with your shared files wherever you are! There are a lot of cool things you can do with Dropbox; make sure to check out our posts on adding Dropbox to the Windows 7 Start menu, accessing Dropbox files from Chrome, and syncing your Pidgin profile across multiple PCs.

    Links

    Signup or access your Dropbox account
    Google Reader
    Tweet your feed with Twitterfeed
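    As an aside for the developers on your team: since the folder history is just a standard RSS feed, you can also consume it from your own code.  Here is a minimal, hypothetical C# sketch (the feed URL is a placeholder; use the address Dropbox gave you, and note that SyndicationFeed lives in the System.ServiceModel.Web assembly on .NET 3.5) that prints the latest events:

        using System;
        using System.ServiceModel.Syndication;
        using System.Xml;

        class DropboxFeedWatcher
        {
            static void Main()
            {
                // Placeholder URL; paste your own feed address from Dropbox here.
                const string feedUrl = "https://www.dropbox.com/.../events.xml";

                using (XmlReader reader = XmlReader.Create(feedUrl))
                {
                    SyndicationFeed feed = SyndicationFeed.Load(reader);
                    foreach (SyndicationItem item in feed.Items)
                    {
                        // Each item describes one change in the shared folder.
                        Console.WriteLine("{0:u}  {1}", item.PublishDate, item.Title.Text);
                    }
                }
            }
        }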

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #038

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and have listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane.

    2007

    CASE Statement in ORDER BY Clause – ORDER BY using Variable
    This article is as per request from the Application Development Team Leader of my company. His team encountered code where the application was preparing a string for the ORDER BY clause of the SELECT statement. The application was passing this string as a variable to a Stored Procedure (SP), and the SP was using EXEC to execute the SQL string. This is not good for performance, as the Stored Procedure has to recompile every time due to EXEC. sp_executesql can do the same task, but still does not give the best performance.

    SSMS – View/Send Query Results to Text/Grid/Files
    Results to Text – CTRL + T
    Results to Grid – CTRL + D
    Results to File – CTRL + SHIFT + F

    2008

    Introduction to SPARSE Columns Part 2
    I wrote about Introduction to SPARSE Columns Part 1. Let us understand the concept of the SPARSE column in more detail. I suggest you read the first part before continuing with this article. All SPARSE columns are stored as one XML column in the database. Let us see some of the advantages and disadvantages of SPARSE columns.

    Deferred Name Resolution
    How come when a table name is incorrect an SP can be created successfully, but when an incorrect column is used the SP cannot be created?

    2009

    Backup Timeline and Understanding of Database Restore Process in Full Recovery Model
    In general, database backups in full recovery mode are taken as three different kinds of backup files: Full Database Backup, Differential Database Backup, and Log Backup.

    Restore Sequence and Understanding NORECOVERY and RECOVERY
    While doing a RESTORE operation, if you are restoring database files, always use the NORECOVERY option, as that will keep the database in a state where more backup files can be restored. This will also keep the database offline to prevent any changes, which could create integrity issues. Once all backup files are restored, run the RESTORE command with the RECOVERY option to get the database online and operational.

    Four Different Ways to Find Recovery Model for Database
    Perhaps the best thing about the technical domain is that most things can be executed in more than one way. It is always useful to know about the various methods of performing a single task.

    Two Methods to Retrieve List of Primary Keys and Foreign Keys of Database
    When Information Schema is used, we will not be able to discern between primary key and foreign key; we will have both keys together. In the case of the sys schema, we can query the data in our preferred way and can join this table to another table, which can retrieve additional data from the same.

    Get Last Running Query Based on SPID
    SPID returns the session ID of the current user process. The acronym SPID comes from the name of its earlier version, Server Process ID.

    2010

    SELECT * FROM dual – Dual Equivalent
    Dual is a table that is created by Oracle together with the data dictionary. It consists of exactly one column named "dummy", and one record. The value of that record is X. You can check the content of the DUAL table using the following syntax: SELECT * FROM dual

    Identifying Statistics Used by Query
    Someone asked this question in my training class on query optimization and performance tuning.
    “Can I know which statistics were used by my query?”

    2011

    SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 14 of 31
    What are the basic functions for master, msdb, model, tempdb and resource databases? What is the Maximum Number of Index per Table? Explain Few of the New Features of SQL Server 2008 Management Studio Explain IntelliSense for Query Editing Explain MultiServer Query Explain Query Editor Regions Explain Object Explorer Enhancements Explain Activity Monitors

    SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 15 of 31
    What is Service Broker? Where are SQL server Usernames and Passwords Stored in the SQL server? What is Policy Management? What is Database Mirroring? What are Sparse Columns? What does TOP Operator Do? What is CTE? What is MERGE Statement? What is Filtered Index? Which are the New Data Types Introduced in SQL SERVER 2008?

    SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 16 of 31
    What are the Advantages of Using CTE? How can we Rewrite Sub-Queries into Simple Select Statements or with Joins? What is CLR? What are Synonyms? What is LINQ? What are Isolation Levels? What is Use of EXCEPT Clause? What is XPath? What is NOLOCK? What is the Difference between Update Lock and Exclusive Lock?

    SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 17 of 31
    How will you Handle Error in SQL SERVER 2008? What is RAISERROR? How to Rebuild the Master Database? What is the XML Datatype? What is Data Compression? What is Use of DBCC Commands? How to Copy the Tables, Schema and Views from one SQL Server to Another? How to Find Tables without Indexes?

    SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 18 of 31
    How to Copy Data from One Table to Another Table? What is Catalog Views? What is PIVOT and UNPIVOT? What is a Filestream? What is SQLCMD? What do you mean by TABLESAMPLE? What is ROW_NUMBER()? What are Ranking Functions? What is Change Data Capture (CDC) in SQL Server 2008?

    SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 19 of 31
    How can I Track the Changes or Identify the Latest Insert-Update-Delete from a Table? What is the CPU Pressure? How can I Get Data from a Database on Another Server? What is the Bookmark Lookup and RID Lookup? What is Difference between ROLLBACK IMMEDIATE and WITH NO_WAIT during ALTER DATABASE? What is Difference between GETDATE and SYSDATETIME in SQL Server 2008? How can I Check whether Automatic Statistic Update is Enabled or not? How to Find Index Size for Each Index on Table? What is the Difference between Seek Predicate and Predicate? What are Basics of Policy Management? What are the Advantages of Policy Management?

    SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 20 of 31
    What are Policy Management Terms? What is the ‘FILLFACTOR’? Where in MS SQL Server is ’100’ equal to ‘0’? What are Points to Remember while Using the FILLFACTOR Argument? What is a ROLLUP Clause? What are Various Limitations of the Views? What is a Covered index? When I Delete any Data from a Table, does the SQL Server reduce the size of that table? What are Wait Types? How to Stop Log File Growing too Big? If any Stored Procedure is Encrypted, then can we see its definition in Activity Monitor?
    2012

    Example of Width Sensitive and Width Insensitive Collation
    Width Sensitive Collation: when a single-byte (half-width) representation of a character and the double-byte (full-width) representation of the same character compare as not equal, the collation is width sensitive. In this example we have one table with two columns. One column has a width sensitive collation and the second column has a width insensitive collation.

    Find Column Used in Stored Procedure – Search Stored Procedure for Column Name
    A very interesting conversation about how to find a column used in a stored procedure. There are two different characters in the story, and both are having a conversation about how to find a column in a stored procedure. Here is the two-part story: Part 1 | Part 2

    SQL SERVER – 2012 Functions – FORMAT() and CONCAT() – An Interesting Usage

    Generate Script for Schema and Data – SQL in Sixty Seconds #021 – Video
    In simple words, in many cases a database moves from one place to another place. It is not always possible to back up and restore databases. There are cases when only part of the database (with schema and data) has to be moved. In this video we learn that we can easily generate a script for the schema and the data and move it from one server to another.

    INFORMATION_SCHEMA.COLUMNS and Value Character Maximum Length -1
    I often see the value -1 in the CHARACTER_MAXIMUM_LENGTH column of the INFORMATION_SCHEMA.COLUMNS table. I understand that the length of any column can be between 0 and a large number, but I do not get it when I see a value in the negative (i.e. -1). Any insight on this subject?

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Tools of the Trade

    - by Ajarn Mark Caldwell
    I got pretty excited a couple of days ago when my new laptop arrived. "The new phone books are here!  The new phone books are here!  I'm a somebody!" - Steve Martin in The Jerk

    It is a Dell Precision M4500 with an Intel Core i7 at 2.8 GHz running 64-bit Windows 7, with a 15.6" widescreen, 8 GB RAM, and a 256 GB SSD.  For some of you high fliers, this may be nothing to write home about, but compared to the 32-bit Windows XP laptop with 2 GB of RAM and a regular hard disk that I'm coming from, it's a really nice step forward.  I won't even bore you with the details of the desktop PC I was first given when I started here 5 1/2 years ago.  Let's just say that things have improved.  One really nice thing is that while we are definitely running a lean and mean department in terms of staffing, my boss believes in supporting that lean staff with good tools in order to stay lean instead of having to spend even more money on additional employees.  Of course, that only goes so far, and at some point you have to add more people in order to get more work done, which is why we are bringing on-board a new employee and a new contract developer next week.  But that's a different story for a different time.

    But the main topic for this post is to highlight the variety of tools that I use in my job and that you might find useful, too.  This is easy to do right now because the process of building up my new laptop from scratch has forced me to assemble a list of software that had to be installed and configured.  Keep in mind as you look through this list that I play many roles in our company.  My official title is Software Engineering Manager, but in addition to managing the team, I am also an active ASP.NET and SQL developer, the Database Administrator, and 50% of the SAN Administrator team.  So, without further ado, here are the tools and some comments about why I use them:

    Virtual Clone Drive: Easily mount an ISO image as a DVD drive.  This is particularly handy when you are downloading disk images from Microsoft for your tools.

    SQL Server 2008 R2 Developer Edition: We are migrating all of our active systems to SQL 2008 R2.  Developer Edition has all the features of Enterprise Edition, but is intended for development use.

    SQL Server 2005 Developer Edition (BIDS ONLY): The migration to SSRS 2008 R2 is just getting started, and in the meantime, maintenance work still has to be done on the reports on our SQL 2005 server.  For some reason, you can't use BIDS from 2008 to write reports for a 2005 server.  There is some different format, and when you open 2005 reports in 2008 BIDS, it forces you to upgrade them, and they can no longer be uploaded to a 2005 server.  Hopefully Microsoft will fix this soon in some manner similar to the way Visual Studio now allows you to pick which version of the .NET Framework you are coding against.

    Visual Studio 2010 Premium: All of our application development is in ASP.NET, and we might as well use the tool designed for it.  I've used a version of Visual Studio going all the way back to VB 6.0 and Visual Interdev.

    Vault Professional Client: Several years ago we replaced Visual Source Safe with SourceGear Vault (then Fortress, and now Vault Pro), and I love it.  It is very reliable with low overhead - perfect for a small to medium size development team.  And being a small ISV, their support is exceptional.

    Red-Gate Developer Bundle with the SQL Source Control update for Vault: I first used, and fell in love with, SQL Prompt shortly before Red-Gate bought it, and then Red-Gate's first release made me love it even more.  SQL Refactor (which has since been rolled into the latest version of SQL Prompt) has saved me many hours and migraines trying to understand somebody else's code when their indenting was nonexistent, or worse, irrational.  SQL Compare has been awesome for troubleshooting potential schema issues between different instances of system databases.  SQL Data Compare helped us identify the cause behind a bug which appeared in PROD but could not be reproduced in a nearly (but not quite exactly) identical copy in UAT.  And the newest tool we are embracing: SQL Source Control.  I blogged about it here (and here, and here) last December.  This is really going to help us keep each developer's copy of the database in sync with the others.

    Fiddler: Helps you watch the whole traffic stream on web visits.  Haven't used it a lot, but it did help me track down some odd 404 errors we were finding in our own application logs.  Has some other JavaScript troubleshooting capabilities, but some of its usefulness has been supplanted by the Developer Tools option in IE8.

    Funduc Search & Replace: Find any string anywhere in a mound of source code really, really fast.  Does RegEx searches, if you understand that foreign language.  Has really helped with some refactoring work to pinpoint, for example, everywhere a particular stored procedure is referenced, whether in .NET code or other SQL procedures (which we have in script files).  Provides in-context preview of the search results.  Fantastic tool, and a bargain price.

    SciTE: SciTE is a Scintilla-based text editor and a fantastic, light-weight tool for quickly reviewing (or writing) program code, SQL scripts, and extract files.  It has language-specific syntax highlighting.  I used it to write several batch and CMD programs a year ago, and to examine data extract files for exchanging information with other systems.  Extremely handy are the options to View End of Line and View Whitespace.  Ever receive a file that is supposed to use CRLF as an end-of-line marker, but really only has CRs?  SciTE will quickly make that visible.

    Infragistics Controls: We do a lot of ASP.NET development, and frequently use the WebGrid, WebTab, and date picker controls.  We will likely be implementing the Hierarchical Data Grid soon.  Infragistics has control suites for WebForms, WinForms, Silverlight, and coming soon MVC/JQuery.

    WinZip - WITH Command-Line add-in: The classic compression program with a great command-line interface that allows me to build those CMD (and soon PowerShell) programs for automated compression jobs.  Our versioned Build packages are zip files.

    XML Notepad: Haven't used this a lot myself, but one of my team really likes it for examining large XML files.

    LINQPad: Again, haven't used this one a lot, but it was recommended to me for learning and practicing my LINQ skills, which will come in handy as we implement Entity Framework.

    SQL Sentry Plan Explorer: SQL Server Show Plan on steroids.  Great for helping you focus on the parts of a large query that are of most importance.  Also great for just compressing the graphical plan into a more readable layout.

    Araxis Merge: A great DIFF and Merge tool.  SourceGear provides a great tool called DiffMerge that we use all the time, but occasionally I like the cross-edit capabilities of Araxis Merge.  For a while, we also produced DIFF reports in HTML that showed all the changes that occurred between two releases.  This was most important when we were putting out very small, but very important, hot fixes on a very politically hot system.  The reports produced by Araxis Merge gave the Director of IS assurance that we were not accidentally introducing ripples throughout the system with our releases.

    Idera SQL Admin Toolset: A great collection of tools, including a password checker to help analyze your SQL Server for weak user passwords, and a Backup Status tool to quickly scan a large list of servers and databases to identify any that are overdue for backups.  Particularly helpful for highlighting new databases that have been deployed without getting included in your backup processing.  I also like Space Analyzer to keep an eye on disk space consumed by database files.

    Idera SQL Job Manager: This free tool provides a nice calendar view of SQL Server job schedules, but to a degree, you also get what you pay for.  We will be purchasing SQL Sentry Event Manager later this year as an even better job schedule reviewer/manager.  But in the meantime, this at least gives me a good view of potential resource conflicts across multiple instances of SQL Server.

    DBFViewer 2000: I inherited a couple of FoxPro databases that I have to keep an eye on occasionally and have not yet been able to migrate to SQL Server.

    Balsamiq Mockups: We are still in evaluation mode on this tool, but I really like it as a quick UI mockup tool that does not require Visual Studio, so someone other than a programmer can do UI design.  The interface looks hand-drawn, which definitely has some psychological benefits when communicating with users, too.

    FeedDemon: I have to stay on top of my WAY TOO MANY blog subscriptions somehow.  I may read blogs on a couple of different computers, and FeedDemon's integration with Google Reader allows me to keep them all in sync.  I don't particularly like the Google Reader interface, or the fact that it always wanted to mark articles as read just because I scrolled past them.  FeedDemon solves this problem for me, and provides a multi-tabbed interface, which is good because fairly frequently one blog will link to something else I want to read, and I can end up with a half-dozen open tabs all from one article.

    Synergy+: In my office, I run four monitors across two computers, all with one mouse and keyboard.  Synergy is the magic software that makes this work.

    TweetDeck: I'm not the most active Tweeter in the world, but when I want to check in with the Twitterverse, this really helps.  I have found the #sqlhelp and #PoshHelp hash tags particularly useful, and I also have columns set up to make it easy to monitor #sqlpass, #PASSProfDev, and short-term events like #sqlsat68.

    Whew!  That's a lot.  No wonder it took me a couple of days to get everything set up the way I wanted it.  Oh, that and actually getting some work accomplished at the same time.  Anyway, I know that is a huge dump of info, and most people never make it here to the end, so for those who did, let me say, CONGRATULATIONS, you made it! I hope you'll find a new tool or two to make your work life a little easier.

    Read the article

  • Of C# Iterators and Performance

    - by James Michael Hare
Some of you reading this will be wondering, "what is an iterator?" and think I'm locked in the world of C++. Nope, I'm talking C# iterators. No, not enumerators, iterators. So, for those of you who do not know what iterators are in C#, I will explain them in summary, and for those of you who know what iterators are but are curious about the performance impacts, I will explore that as well.

Iterators have been around for a bit now, and there are still a bunch of people who don't know what they are or what they do. I don't know how many times at work I've had a code review on my code and have someone ask me, "what's that yield word do?"

Basically, this post came to me as I was writing some extension methods to extend IEnumerable<T> -- I'll post some of the fun ones in a later post. Since I was filtering the resulting list down, I was using the standard C# iterator concept; but that got me wondering: what are the performance implications of using an iterator versus returning a new enumeration?

So, to begin, let's look at a couple of methods. This is a new (albeit contrived) method called Every(...). The goal of this method is to accept an enumeration and return every nth item in the enumeration (including the first). So Every(2) would return items 0, 2, 4, 6, etc.

Now, if you wanted to write this in the traditional way, you may come up with something like this:

    public static IEnumerable<T> Every<T>(this IEnumerable<T> list, int interval)
    {
        List<T> newList = new List<T>();
        int count = 0;

        foreach (var i in list)
        {
            if ((count++ % interval) == 0)
            {
                newList.Add(i);
            }
        }

        return newList;
    }

So basically this method takes any IEnumerable<T> and returns a new IEnumerable<T> that contains every nth item. Pretty straightforward.

The problem? Well, Every<T>(...) will construct a list containing every nth item whether or not you care. What happens if you were searching this result for a certain item and found that item after five tries? You would have generated the rest of the list for nothing.

Enter iterators. This C# construct uses the yield keyword to effectively defer evaluation of the next item until it is asked for. This can be very handy if the evaluation itself is expensive or if there's a fair chance you'll never want to fully evaluate a list.

We see this all the time in Linq, where many expressions are chained together to do complex processing on a list. This would be very expensive if each of these expressions evaluated their entire possible result set on call.

Let's look at the same example function, this time using an iterator:

    public static IEnumerable<T> Every<T>(this IEnumerable<T> list, int interval)
    {
        int count = 0;
        foreach (var i in list)
        {
            if ((count++ % interval) == 0)
            {
                yield return i;
            }
        }
    }

Notice it does not create a new return value explicitly; the only evidence of a return is the "yield return" statement. What this means is that when an item is requested from the enumeration, it will enter this method and evaluate until it either hits a yield return (in which case that item is returned) or until it exits the method or hits a yield break (in which case the iteration ends).
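To make that deferred evaluation concrete before we look behind the scenes, here is a minimal, self-contained sketch (my own illustration, not from the original post); the console output shows that items are only produced as they are consumed:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class LazyDemo
    {
        // an iterator that logs each time it actually produces an item
        static IEnumerable<int> Numbers()
        {
            for (int i = 0; i < 5; i++)
            {
                Console.WriteLine("producing " + i);
                yield return i;
            }
        }

        static void Main()
        {
            foreach (var n in Numbers().Take(2))
            {
                Console.WriteLine("consumed " + n);
            }
        }
    }

Running this prints only "producing 0" and "producing 1"; items 2 through 4 are never evaluated, because Take(2) stops pulling from the iterator.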
Behind the scenes, this is all done with a class that the C# compiler generates to keep track of the state of the iteration, so that every time the next item is asked for, it finds that item and then updates the current position so it knows where to start next time.

It doesn't seem like a big deal, does it? But keep in mind the key point here: it only returns items as they are requested. Thus, if there's a good chance you will only process a portion of the return list and/or if the evaluation of each item is expensive, an iterator may be of benefit. This is especially true if you intend your methods to be chainable, similar to the way Linq methods can be chained.

For example, perhaps you have a List<int> and you want to take every tenth one until you find one greater than 10. We could write that as:

    List<int> someList = new List<int>();

    // fill list here

    someList.Every(10).TakeWhile(i => i <= 10);

Now is the difference more apparent? If we use the first form of Every, the one that makes a copy of the list, it's going to copy the entire list whether we will need those items or not, and that can be costly! With the iterator version, however, it will only take items from the list until it finds one that is > 10, at which point no further items in the list are evaluated.

So, sounds neat, eh? But what's the cost? That's what you're probably wondering. So I ran some tests using the two forms of Every above on lists varying from 5 to 500,000 integers and tried various things.

Now, iteration isn't free. If you are more likely than not to iterate the entire collection every time, the iterator has some very slight overhead:

Copy vs Iterator on 100% of Collection (10,000 iterations)

    Collection Size   Num Iterated   Type       Total ms
    5                 5              Copy       5
    5                 5              Iterator   5
    50                50             Copy       28
    50                50             Iterator   27
    500               500            Copy       227
    500               500            Iterator   247
    5,000             5,000          Copy       2,266
    5,000             5,000          Iterator   2,444
    50,000            50,000         Copy       24,443
    50,000            50,000         Iterator   24,719
    500,000           500,000        Copy       250,024
    500,000           500,000        Iterator   251,521

Notice that when iterating over the entire produced list, the times for the iterator are a little better for smaller lists, then get just a slight bit worse for larger lists. In reality, given the number of items and iterations, the difference is near negligible, but it shows that iterators come at a price. However, it should also be noted that the form of Every that returns a copy will have a left-over collection to garbage collect.

However, if we evaluate less and less of the list, the savings start to show and make it well worth the overhead. Let's look at what happens if you stop looking after 80% of the list:

Copy vs Iterator on 80% of Collection (10,000 iterations)

    Collection Size   Num Iterated   Type       Total ms
    5                 4              Copy       5
    5                 4              Iterator   5
    50                40             Copy       27
    50                40             Iterator   23
    500               400            Copy       215
    500               400            Iterator   200
    5,000             4,000          Copy       2,099
    5,000             4,000          Iterator   1,962
    50,000            40,000         Copy       22,385
    50,000            40,000         Iterator   19,599
    500,000           400,000        Copy       236,427
    500,000           400,000        Iterator   196,010

Notice that the iterator form is now operating quite a bit faster.
But the savings really add up if you stop on average at 50% (which most searches would typically do):

Copy vs Iterator on 50% of Collection (10,000 iterations)

    Collection Size   Num Iterated   Type       Total ms
    5                 2              Copy       5
    5                 2              Iterator   4
    50                25             Copy       25
    50                25             Iterator   16
    500               250            Copy       188
    500               250            Iterator   126
    5,000             2,500          Copy       1,854
    5,000             2,500          Iterator   1,226
    50,000            25,000         Copy       19,839
    50,000            25,000         Iterator   12,233
    500,000           250,000        Copy       208,667
    500,000           250,000        Iterator   122,336

Now we see that if we only expect to go on average 50% into the results, we tend to shave off around 40% of the time. And this is only for one level deep. If we are using this in a chain of query expressions, it only adds to the savings.

So my recommendation? If you have a reasonable expectation that someone may only want to partially consume your enumerable result, I would always tend to favor an iterator. The cost if they iterate the whole thing does not add much at all -- and if they consume only partially, you reap some really good performance gains.

Next time I'll discuss some of my favorite extensions I've created to make development life a little easier and maintainability a little better.
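The post shows the timing tables but not the harness, so here is a minimal sketch of how such a comparison might be taken with System.Diagnostics.Stopwatch. The method names EveryCopy and EveryIterator are mine, standing in for the two versions of Every above, and the 50% early exit mirrors the last table:

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;
    using System.Linq;

    static class EveryBenchmark
    {
        // the copying version of Every from above (renamed to avoid ambiguity)
        public static IEnumerable<T> EveryCopy<T>(this IEnumerable<T> list, int interval)
        {
            var newList = new List<T>();
            int count = 0;
            foreach (var i in list)
                if ((count++ % interval) == 0)
                    newList.Add(i);
            return newList;
        }

        // the iterator version of Every from above (renamed to avoid ambiguity)
        public static IEnumerable<T> EveryIterator<T>(this IEnumerable<T> list, int interval)
        {
            int count = 0;
            foreach (var i in list)
                if ((count++ % interval) == 0)
                    yield return i;
        }

        static long Time(Func<IEnumerable<int>> produce, int stopAfter, int runs)
        {
            var sw = Stopwatch.StartNew();
            for (int run = 0; run < runs; run++)
            {
                int taken = 0;
                foreach (var item in produce())
                    if (++taken >= stopAfter) break;   // simulate a partial consume
            }
            sw.Stop();
            return sw.ElapsedMilliseconds;
        }

        static void Main()
        {
            var data = Enumerable.Range(0, 500000).ToList();
            const int runs = 100; // far fewer runs than the post, to keep the demo quick

            // consume only the first half of the produced items
            Console.WriteLine("Copy:     {0} ms", Time(() => data.EveryCopy(1), 250000, runs));
            Console.WriteLine("Iterator: {0} ms", Time(() => data.EveryIterator(1), 250000, runs));
        }
    }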

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #035

    - by Pinal Dave
Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and have listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane.

2007

Row Overflow Data Explanation
In SQL Server 2005, one table row can contain more than one varchar(8000) field. One more thing: this exception has exceptions of its own; the limit of 8000 bytes on an individual column's width does not apply to varchar(max), nvarchar(max), varbinary(max), text, image or xml data type columns.

Comparison Index Fragmentation, Index De-Fragmentation, Index Rebuild – SQL SERVER 2000 and SQL SERVER 2005
An old article, but gold. It talks about lots of concepts related to indexes and the differences from the earlier version to the newer version. I strongly suggest that everyone read this article just to understand how SQL Server has moved forward with the technology.

Improvements in TempDB
SQL Server 2005 came with quite a lot of improvements, and this blog post describes and explains them. If you ask me for my most favorite article from my early career, I must point to this one, as I personally learned a lot of new things when I wrote it.

Recompile All The Stored Procedures on a Specific Table
I prefer to recompile all the stored procedures on a table which has faced a mass insert or update. sp_recompile marks stored procedures to recompile the next time they execute. This blog post explains the same with the help of a script.

2008

SQLAuthority Download – SQL Server Cheatsheet
You can download and print this cheat sheet and use it for your personal reference. If you have any suggestions, please let me know and I will see if I can update this SQL Server cheat sheet.

Difference Between DBMS and RDBMS
What is the difference between DBMS and RDBMS? DBMS – Data Base Management System. RDBMS – Relational Data Base Management System or Relational DBMS.

High Availability – Hot Add Memory
Hot Add CPU and Hot Add Memory are extremely interesting features of SQL Server; however, I personally have not witnessed them heavily used. These features also have a few restrictions. I blogged about them in detail.

2009

Delete Duplicate Rows
I have demonstrated in this blog post how one can identify and delete duplicate rows.

Interesting Observation of Logon Trigger On All Servers – Solution
The question I put forth in my previous article was: for a single login, why does the trigger fire multiple times when it should fire only once? I received numerous answers in the thread as well as in my MVP private news group. Now, let us discuss the answer. The answer is: it happens because multiple SQL Server services are running and IntelliSense is turned on. The blog post demonstrates the same with the help of SQL scripts.

Management Studio New Features
I have selected my favorite 5 features and blogged about them: IntelliSense for query editing, multi-server queries, query editor regions, Object Explorer enhancements, and Activity Monitor.

Maximum Number of Indexes per Table
One of the questions I asked in my user group was: what is the maximum number of indexes per table? I received lots of answers to this question, but only two answers were correct. Let us now take a look at them in this blog post.
2010

Default Statistics on Column – Automatic Statistics on Column
The truth is, statistics can exist on a table even though there is no index on it. If you have the auto-create and/or auto-update statistics feature turned on for a SQL Server database, statistics will be automatically created on a column based on a few conditions. Please read my previously posted article, SQL SERVER – When are Statistics Updated – What triggers Statistics to Update, for the specific conditions under which statistics are updated.

2011

T-SQL Scripts to Find Maximum between Two Numbers
In this blog post there are two different scripts listed which demonstrate ways to find the maximum of two numbers. I need your help: which one of the scripts do you think is the most accurate way to find the maximum number?

Find Details for Statistics of Whole Database – DMV – T-SQL Script
I was recently asked whether there is a single script which can provide all the necessary details about statistics for any database. This question made me write the following script. I was initially planning to use the sp_helpstats command, but I remembered that it is marked to be deprecated in the future.

2012

Introduction to Function SIGN
SIGN is a very fundamental function. It will return the value 1, -1 or 0. If your value is negative it will return -1, and if it is positive it will return +1. Let us start with a simple small example.

Template Browser – A Very Important and Useful Feature of SSMS
Templates are like a quick cheat sheet or quick reference. Templates are available to create objects like databases, tables, views, indexes, stored procedures, triggers, statistics, and functions. Templates are also available for Analysis Services. The template scripts contain parameters to help you customize the code. You can use the Replace Template Parameters dialog box to insert values into the script.

An invalid floating point operation occurred
If you run any of the functions covered there with inappropriate values, they will give you an error related to an invalid floating point operation. Honestly, there is no workaround except passing the function appropriate values. SQRT of a negative number would give you a result in imaginary numbers, which is not supported at this point in time, and LOG of a negative number is not possible (because the logarithm is the inverse function of an exponential function, and the exponential function is never negative).

Validating Spatial Object with IsValidDetailed Function
SQL Server 2012 has introduced the new function IsValidDetailed(). This function has made my life very easy. In simple words, this function will check if the spatial object passed is valid or not. If it is valid, it will report that it is valid. If the spatial object is not valid, it will return the answer that it is not valid along with the reason. This makes it very easy to debug the issue and make the necessary correction.

Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
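As a quick aside to the SIGN entry above, the behavior is easy to verify from client code. Here is a minimal C# sketch (my own illustration; the connection string is a placeholder you would adjust for your server):

    using System;
    using System.Data.SqlClient;

    class SignDemo
    {
        static void Main()
        {
            // placeholder connection string -- point this at your own instance
            using (var conn = new SqlConnection(@"Server=.\SQLEXPRESS;Integrated Security=true"))
            using (var cmd = new SqlCommand("SELECT SIGN(@v)", conn))
            {
                cmd.Parameters.AddWithValue("@v", -42);
                conn.Open();
                // prints -1, because the input is negative
                Console.WriteLine(cmd.ExecuteScalar());
            }
        }
    }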

    Read the article

  • SQL SERVER – Thinking about Deprecated, Discontinued Features and Breaking Changes while Upgrading to SQL Server 2012 – Guest Post by Nakul Vachhrajani

    - by pinaldave
Nakul Vachhrajani is a Technical Specialist and systems development professional with iGATE, with more than 7 years of total IT experience. Nakul is an active blogger with BeyondRelational.com (150+ blogs), and can also be found on forums at SQLServerCentral and BeyondRelational.com. Nakul has also been a guest columnist for SQLAuthority.com and SQLServerCentral.com. Nakul presented a webcast on the "Underappreciated Features of Microsoft SQL Server" at the Microsoft Virtual Tech Days Exclusive Webcast series (May 02-06, 2011) on May 06, 2011. He is also the author of a research paper on database upgrade methodologies, which was published in a nationally distributed CSI journal. In addition to his passion for SQL Server, Nakul also contributes to academia out of personal interest. He visits various colleges and universities as external faculty to judge project activities being carried out by the students.

Disclaimer: The opinions expressed herein are his own personal opinions and do not represent his employer's view in any way.

Blog | LinkedIn | Twitter | Google+

Let us hear the thoughts of Nakul in first person -

Those who have been following my blogs would be aware that I am currently running a series on the database engine features that have been deprecated in Microsoft SQL Server 2012. Based on the response that I have received, I was quite surprised to learn that most of the audience found these to be breaking changes, when in fact, they were not! It was then that I decided to write a little piece on how to plan your database upgrade such that it works with the next version of Microsoft SQL Server. Please note that the recommendations made in this article are high-level markers and are intended to help you think over the specific steps that you would need to take to upgrade your database.

Refer to the documentation – understand the terms

Change is the only constant in this world. Therefore, whenever customer requirements or newer architectures and designs require software vendors to make a change to keywords, functions, etc., they ensure that they provide their end users sufficient time to migrate over to the new standards before dropping the old ones. Microsoft does that too with its Microsoft SQL Server product. Whenever a new SQL Server release is announced, it comes with lists of the following kinds of features:

Breaking changes
- These are changes that would break your currently running applications, scripts or functionalities that are based on an earlier version of Microsoft SQL Server
- These are mostly features whose behavior has been changed keeping in mind the newer architectures and designs
- Lesson: These are the changes that you need to be most worried about!

Discontinued features
- These features are no longer available in the associated version of Microsoft SQL Server
- These features used to be "deprecated" in the prior release
- Lesson: Without these changes, your database would not be compliant with, or may not work with, the version of Microsoft SQL Server under consideration

Deprecated features
- These features are still available in the current version of Microsoft SQL Server, but are scheduled for removal in a future version.
- These may be removed in either the next version or any other future version of Microsoft SQL Server
- The features listed for deprecation will compose the list of discontinued features in the next version of SQL Server
- Lesson: Plan to make the changes required to remove or replace usage of the deprecated features with the latest recommended replacements

Once a feature appears on the list, it moves from bottom to top, i.e. it is first marked as "Deprecated" and then "Discontinued". A "Breaking change" comes later on in the product life cycle. What this means is that if you want to know what features would not work with SQL Server 2012 (and you are currently using SQL Server 2008 R2), you need to refer to the lists of breaking changes and discontinued features in SQL Server 2012.

Use the tools!

There are a lot of tools and technologies around us, but it is rare that I find teams using these tools religiously and to the best of their potential. Below are the top two tools, from Microsoft, that I use every time I plan a database upgrade.

The SQL Server Upgrade Advisor

Ever since SQL Server 2005 was announced, Microsoft has provided a small, very lightweight tool called the SQL Server Upgrade Advisor. The Upgrade Advisor analyzes installed components from earlier versions of SQL Server, and then generates a report that identifies issues to fix either before or after you upgrade. The analysis examines objects that can be accessed, such as scripts, stored procedures, triggers, and trace files. Upgrade Advisor cannot analyze desktop applications or encrypted stored procedures. Refer to the links towards the end of the post to know how to get the Upgrade Advisor.

The SQL Server Profiler

Another great tool that you can use is the one most SQL Server developers and administrators use often – the SQL Server Profiler. SQL Server Profiler provides functionality to monitor the "Deprecation" event, which contains:
- Deprecation announcement – equivalent to features to be deprecated in a future release of SQL Server
- Deprecation final support – equivalent to features to be deprecated in the next release of SQL Server

You can learn more using the links towards the end of the post.

A basic checklist

There are a lot of finer points that need to be taken care of when upgrading your database. But it would be worthwhile to identify a few basic steps in order to make your database compliant with the next version of SQL Server:
- Monitor the current application workload (on a test bed) via the Profiler in order to identify usage of features marked as Deprecated. If none appear, you are all set! (This almost never happens.)
- Note down all the offending queries and feature usages
- Run analysis sessions using the SQL Server Upgrade Advisor on your database
- Based on the inputs from the analysis report and Profiler trace sessions, incorporate solutions for the breaking changes first
- Next, incorporate solutions for the discontinued features
- Revisit and document the upgrade strategy for your deployment scenarios
- Revisit the fall-back, i.e. rollback, strategies in case the upgrades fail. Because some programming changes are dependent upon the SQL Server version, this may need to be done in consultation with the development teams
- Before any other enhancements are incorporated by the development team, send out the database changes into QA
- The QA strategy should involve a comparison between an environment running the old version of SQL Server and one running the new version. Because minimal application changes have gone in (essential changes for SQL Server version compliance only), this would be possible
- As an ongoing activity, keep incorporating the changes recommended per the deprecated features list
- As a DBA, update your coding standards to ensure that the developers are using ANSI-compliant code -- this code will require a change only if the ANSI standard changes

Remember this: Change management is a continuous process. Keep revisiting the product release notes and incorporate recommended changes to stay prepared for the next release of SQL Server. May the power of SQL Server be with you!

Links referenced in this post
- Breaking changes in SQL Server 2012: Link
- Discontinued features in SQL Server 2012: Link
- Get the Upgrade Advisor from the Microsoft Download Center at: Link
- Upgrade Advisor page on MSDN: Link
- Profiler: Review T-SQL code to identify objects no longer supported by Microsoft: Link
- Upgrading to SQL Server 2012 by Vinod Kumar: Link

Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Upgrade
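The post relies on the Upgrade Advisor and Profiler; as a complementary quick check, SQL Server (2008 and later) also exposes deprecated-feature usage counts through the sys.dm_os_performance_counters DMV. Here is a minimal C# sketch of reading it (my own illustration; the connection string is a placeholder):

    using System;
    using System.Data.SqlClient;

    class DeprecationCheck
    {
        static void Main()
        {
            // lists each deprecated feature the server has seen in use since startup
            const string query =
                @"SELECT instance_name, cntr_value
                  FROM sys.dm_os_performance_counters
                  WHERE object_name LIKE '%Deprecated Features%'
                    AND cntr_value > 0";

            using (var conn = new SqlConnection(@"Server=.;Integrated Security=true"))
            using (var cmd = new SqlCommand(query, conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine("{0}: used {1} time(s)",
                            reader.GetString(0).Trim(), reader.GetInt64(1));
                }
            }
        }
    }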

    Read the article

  • WPA2 authentication fails on Ubuntu 12.04 using Rosewill RNX-N1

    - by user94156
Decided to reduce the clutter in the house and replace a wired connection with a wireless one on my wife's system, using the USB network device Rosewill RNX-X1. I can see and connect to an unprotected network, but WPA2 authentication repeatedly fails. The RNX-X1 works on other systems (including the TV); I also have two of them and tried each. Worth noting that I recently switched from Comcast to CenturyLink and so switched routers. The system connected successfully to the previous router (Linksys EA4500) using WPA2. I would suspect the router (Actiontec C1000A), but all other devices (TV, iPad, Windows, Blackberry, and Squeezebox) connect OK. I would appreciate some diagnostic guidance and insight (phrased for a newbie!). Tests to date:

sudo lshw -class network

    *-network
        description: Ethernet interface
        product: RTL8111/8168B PCI Express Gigabit Ethernet controller
        vendor: Realtek Semiconductor Co., Ltd.
        physical id: 0
        bus info: pci@0000:03:00.0
        logical name: eth0
        version: 01
        serial: 00:e0:4d:30:40:a1
        size: 10Mbit/s
        capacity: 1Gbit/s
        width: 64 bits
        clock: 33MHz
        capabilities: pm vpd msi pciexpress bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
        configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half firmware=N/A latency=0 link=no multicast=yes port=MII speed=10Mbit/s
        resources: irq:47 ioport:ac00(size=256) memory:fdcff000-fdcfffff memory:fdb00000-fdb1ffff
    *-network
        description: Wireless interface
        physical id: 1
        bus info: usb@1:2
        logical name: wlan1
        serial: 00:02:6f:bd:30:a0
        capabilities: ethernet physical wireless
        configuration: broadcast=yes driver=rt2800usb driverversion=3.2.0-31-generic firmware=0.29 link=no multicast=yes wireless=IEEE 802.11bgn

sudo lspci -v 00:00.0 RAM memory: NVIDIA Corporation MCP67 Memory Controller (rev a2) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: bus master, 66MHz, fast devsel, latency 0 Capabilities: [44] HyperTransport: Slave or Primary Interface Capabilities: [dc] HyperTransport: MSI Mapping Enable+ Fixed- 00:01.0 ISA bridge: NVIDIA Corporation MCP67 ISA Bridge (rev a2) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: bus master, 66MHz, fast devsel, latency 0 00:01.1 SMBus: NVIDIA Corporation MCP67 SMBus (rev a2) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: 66MHz, fast devsel, IRQ 11 I/O ports at fc00 [size=64] I/O ports at 1c00 [size=64] I/O ports at 1c40 [size=64] Capabilities: [44] Power Management version 2 Kernel driver in use: nForce2_smbus Kernel modules: i2c-nforce2 00:01.2 RAM memory: NVIDIA Corporation MCP67 Memory Controller (rev a2) Flags: 66MHz, fast devsel 00:02.0 USB controller: NVIDIA Corporation MCP67 OHCI USB 1.1 Controller (rev a2) (prog-if 10 [OHCI]) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 23 Memory at fe02f000 (32-bit, non-prefetchable) [size=4K] Capabilities: [44] Power Management version 2 Kernel driver in use: ohci_hcd 00:02.1 USB controller: NVIDIA Corporation MCP67 EHCI USB 2.0 Controller (rev a2) (prog-if 20 [EHCI]) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 22 Memory at fe02e000 (32-bit, non-prefetchable) [size=256] Capabilities: [44] Debug port: BAR=1 offset=0098 Capabilities: [80] Power Management version 2 Kernel driver in use: ehci_hcd 00:04.0 USB controller: NVIDIA Corporation MCP67 OHCI USB 1.1 Controller (rev a2) (prog-if 10 [OHCI]) Subsystem: Biostar Microtech Int'l Corp Device
3409 Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 21 Memory at fe02d000 (32-bit, non-prefetchable) [size=4K] Capabilities: [44] Power Management version 2 Kernel driver in use: ohci_hcd 00:04.1 USB controller: NVIDIA Corporation MCP67 EHCI USB 2.0 Controller (rev a2) (prog-if 20 [EHCI]) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 20 Memory at fe02c000 (32-bit, non-prefetchable) [size=256] Capabilities: [44] Debug port: BAR=1 offset=0098 Capabilities: [80] Power Management version 2 Kernel driver in use: ehci_hcd 00:06.0 IDE interface: NVIDIA Corporation MCP67 IDE Controller (rev a1) (prog-if 8a [Master SecP PriP]) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: bus master, 66MHz, fast devsel, latency 0 [virtual] Memory at 000001f0 (32-bit, non-prefetchable) [size=8] [virtual] Memory at 000003f0 (type 3, non-prefetchable) [size=1] [virtual] Memory at 00000170 (32-bit, non-prefetchable) [size=8] [virtual] Memory at 00000370 (type 3, non-prefetchable) [size=1] I/O ports at f000 [size=16] Capabilities: [44] Power Management version 2 Kernel driver in use: pata_amd Kernel modules: pata_amd 00:07.0 Audio device: NVIDIA Corporation MCP67 High Definition Audio (rev a1) Subsystem: Biostar Microtech Int'l Corp Device 820c Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 22 Memory at fe024000 (32-bit, non-prefetchable) [size=16K] Capabilities: [44] Power Management version 2 Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+ Capabilities: [6c] HyperTransport: MSI Mapping Enable- Fixed+ Kernel driver in use: snd_hda_intel Kernel modules: snd-hda-intel 00:08.0 PCI bridge: NVIDIA Corporation MCP67 PCI Bridge (rev a2) (prog-if 01 [Subtractive decode]) Flags: bus master, 66MHz, fast devsel, latency 0 Bus: primary=00, secondary=01, subordinate=01, sec-latency=32 I/O behind bridge: 0000c000-0000cfff Memory behind bridge: fdf00000-fdffffff Prefetchable memory behind bridge: fd000000-fd0fffff Capabilities: [b8] Subsystem: NVIDIA Corporation Device cb84 Capabilities: [8c] HyperTransport: MSI Mapping Enable- Fixed- 00:09.0 IDE interface: NVIDIA Corporation MCP67 AHCI Controller (rev a2) (prog-if 85 [Master SecO PriO]) Subsystem: Biostar Microtech Int'l Corp Device 5407 Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 23 I/O ports at 09f0 [size=8] I/O ports at 0bf0 [size=4] I/O ports at 0970 [size=8] I/O ports at 0b70 [size=4] I/O ports at dc00 [size=16] Memory at fe02a000 (32-bit, non-prefetchable) [size=8K] Capabilities: [44] Power Management version 2 Capabilities: [8c] SATA HBA v1.0 Capabilities: [b0] MSI: Enable- Count=1/8 Maskable- 64bit+ Capabilities: [cc] HyperTransport: MSI Mapping Enable- Fixed+ Kernel driver in use: ahci 00:0b.0 PCI bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=02, subordinate=02, sec-latency=0 I/O behind bridge: 0000b000-0000bfff Memory behind bridge: fde00000-fdefffff Prefetchable memory behind bridge: 00000000fdd00000-00000000fddfffff Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000 Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed- Capabilities: [80] Express Root Port (Slot+), MSI 00 Capabilities: [100] Virtual Channel Kernel driver in use: pcieport Kernel modules: shpchp 00:0c.0 PCI bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) 
(prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=03, subordinate=03, sec-latency=0 I/O behind bridge: 0000a000-0000afff Memory behind bridge: fdc00000-fdcfffff Prefetchable memory behind bridge: 00000000fdb00000-00000000fdbfffff Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000 Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed- Capabilities: [80] Express Root Port (Slot+), MSI 00 Capabilities: [100] Virtual Channel Kernel driver in use: pcieport Kernel modules: shpchp 00:0d.0 PCI bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=04, subordinate=04, sec-latency=0 I/O behind bridge: 00009000-00009fff Memory behind bridge: fda00000-fdafffff Prefetchable memory behind bridge: 00000000fd900000-00000000fd9fffff Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000 Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed- Capabilities: [80] Express Root Port (Slot+), MSI 00 Capabilities: [100] Virtual Channel Kernel driver in use: pcieport Kernel modules: shpchp 00:0e.0 PCI bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=05, subordinate=05, sec-latency=0 I/O behind bridge: 00008000-00008fff Memory behind bridge: fd800000-fd8fffff Prefetchable memory behind bridge: 00000000fd700000-00000000fd7fffff Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000 Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed- Capabilities: [80] Express Root Port (Slot+), MSI 00 Capabilities: [100] Virtual Channel Kernel driver in use: pcieport Kernel modules: shpchp 00:0f.0 PCI bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=06, subordinate=06, sec-latency=0 I/O behind bridge: 00007000-00007fff Memory behind bridge: fd600000-fd6fffff Prefetchable memory behind bridge: 00000000fd500000-00000000fd5fffff Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000 Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed- Capabilities: [80] Express Root Port (Slot+), MSI 00 Capabilities: [100] Virtual Channel Kernel driver in use: pcieport Kernel modules: shpchp 00:10.0 PCI bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=07, subordinate=07, sec-latency=0 I/O behind bridge: 00006000-00006fff Memory behind bridge: fd400000-fd4fffff Prefetchable memory behind bridge: 00000000fd300000-00000000fd3fffff Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000 Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed- Capabilities: [80] Express Root Port (Slot+), MSI 00 Capabilities: [100] Virtual Channel Kernel driver in use: pcieport Kernel modules: shpchp 00:11.0 PCI 
bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=08, subordinate=08, sec-latency=0 I/O behind bridge: 00005000-00005fff Memory behind bridge: fd200000-fd2fffff Prefetchable memory behind bridge: 00000000fd100000-00000000fd1fffff Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000 Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed- Capabilities: [80] Express Root Port (Slot+), MSI 00 Capabilities: [100] Virtual Channel Kernel driver in use: pcieport Kernel modules: shpchp 00:12.0 VGA compatible controller: NVIDIA Corporation C68 [GeForce 7050 PV / nForce 630a] (rev a2) (prog-if 00 [VGA controller]) Subsystem: Biostar Microtech Int'l Corp Device 1406 Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 21 Memory at fb000000 (32-bit, non-prefetchable) [size=16M] Memory at e0000000 (64-bit, prefetchable) [size=256M] Memory at fc000000 (64-bit, non-prefetchable) [size=16M] [virtual] Expansion ROM at 80000000 [disabled] [size=128K] Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit+ Kernel driver in use: nvidia Kernel modules: nvidia_current, nouveau, nvidiafb 00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration Flags: fast devsel Capabilities: [80] HyperTransport: Host or Secondary Interface 00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map Flags: fast devsel 00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller Flags: fast devsel 00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control Flags: fast devsel Capabilities: [f0] Secure device <?> Kernel driver in use: k8temp Kernel modules: k8temp 03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 01) Subsystem: Biostar Microtech Int'l Corp Device 2305 Flags: bus master, fast devsel, latency 0, IRQ 47 I/O ports at ac00 [size=256] Memory at fdcff000 (64-bit, non-prefetchable) [size=4K] [virtual] Expansion ROM at fdb00000 [disabled] [size=128K] Capabilities: [40] Power Management version 2 Capabilities: [48] Vital Product Data Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] Express Endpoint, MSI 00 Capabilities: [84] Vendor Specific Information: Len=4c <?> Capabilities: [100] Advanced Error Reporting Capabilities: [12c] Virtual Channel Capabilities: [148] Device Serial Number 32-00-00-00-10-ec-81-68 Capabilities: [154] Power Budgeting <?> Kernel driver in use: r8169 Kernel modules: r8169 sudo rfkill list all 2: phy2: Wireless LAN Soft blocked: no Hard blocked: no

    Read the article

  • Get Started using Build-Deploy-Test Workflow with TFS 2012

    - by Jakob Ehn
TFS 2012 introduces a new type of Lab environment called a Standard Environment. This allows you to set up a full Build-Deploy-Test (BDT) workflow that will build your application, deploy it to your target machine(s) and then run a set of tests on that server to verify the deployment. In TFS 2010, you had to use System Center Virtual Machine Manager and involve half of your IT department to get going. Now all you need is a server (virtual or physical) where you want to deploy and test your application. You don't even have to install a test agent on the machine; TFS 2012 will do this for you! Although each step is rather simple, the entire process of setting it up consists of a bunch of steps. So I thought that it could be useful to run through a typical setup. I will also link to some good guidance from MSDN on each topic.

High Level Steps
1. Install and configure Visual Studio 2012 Test Controller on target server
2. Create Standard Environment
3. Create Test Plan with Test Case
4. Run Test Case
5. Create Coded UI Test from Test Case
6. Associate Coded UI Test with Test Case
7. Create Build Definition using LabDefaultTemplate

1. Install and Configure Visual Studio 2012 Test Controller on Target Server

First of all, note that you do not have to have the Test Controller running on the target server. It can be running on another server, as long as the Test Agent can communicate with the test controller and the test controller can communicate with the TFS server. If you have several machines in your environment (web server, database server, etc.), the test controller can be installed either on one of those machines or on a dedicated machine.

To install the test controller, simply mount the Visual Studio Agents media on the server and browse to the vstf_controller.exe file located in the TestController folder. Run through the installation; you might need to reboot the server since it installs .NET 4.5. When the test controller is installed, the Test Controller configuration tool will launch automatically (if it doesn't, you can start it from the Start menu). Here you will supply the credentials of the account running the test controller service. Note that this account will be given the necessary permissions in TFS during the configuration. Make sure that you have entered a valid account by pressing the Test link. Also, you have to register the test controller with the TFS collection where your test plan is located (and usually the code base, of course). When you press Apply Settings, all the configuration will be done. You might get some warnings at the end that might or might not cause a problem later. Be sure to read them carefully.

For more information about configuring your test controllers, see Setting Up Test Controllers and Test Agents to Manage Tests with Visual Studio.

2. Create Standard Environment

Now you need to create a Lab environment in Microsoft Test Manager. Since we are using an existing physical or virtual machine, we will create a Standard Environment.
- Open MTM and go to Lab Center.
- Click New to create a new environment.
- Enter a name for the environment. Since this environment will only contain one machine, we will use the machine name for the environment (TargetServer in this case).
- On the next page, click Add to add a machine to the environment. Enter the name of the machine (TargetServer.Domain.Com), and give it the Web Server role. The name must be reachable both from your machine during configuration and from the TFS app tier server.
- You also need to supply an account that is a local administrator on the target server. This is needed in order to automatically install a test agent on the machine later.
- On the next page, you can add tags to the machine. This is not needed in this scenario, so go to the next page.
- Here you will specify which test controller to use and that you want to run UI tests on this environment. This will result in a Test Agent being automatically installed and configured on the target server. The name of the machine where you installed the test controller should be available in the drop-down list (TargetServer in this sample). If you can't see it, you might have selected a different TFS project collection.
- Press Next twice and then Verify to verify all the settings.
- Press Finish. This will now create and prepare the environment, which means that it will remotely install a test agent on the machine. As part of this installation, the remote server will be restarted.

3-5. Create Test Plan, Run Test Case, Create Coded UI Test

I will not cover steps 3-5 here; there is plenty of information on how to create test plans and test cases and automate them using Coded UI Tests. In this example I have a test plan called My Application, and it contains, among other things, a test suite called Automated Tests where I plan to put test cases that should be automated and executed as part of the BDT workflow. For more information about Coded UI Tests, see Verifying Code by Using Coded User Interface Tests.

6. Associate Coded UI Test with Test Case

OK, so now we want to automate our Coded UI Test and have it run as part of the BDT workflow. You might think that your coded UI test already is automated, but the meaning of the term here is that you link your coded UI test to an existing Test Case, thereby making the Test Case automated. And the test case should be part of the test suite that we will run during the BDT. (A minimal skeleton of a coded UI test appears at the end of this post.)
- Open the solution that contains the coded UI test method.
- Open the Test Case work item that you want to automate.
- Go to the Associated Automation tab and click on the "…" button.
- Select the coded UI test that corresponds to the test case.
- Press OK and then save the test case.

For more information about associating an automated test case with a test case, see How to: Associate an Automated Test with a Test Case.

7. Create Build Definition using LabDefaultTemplate

Now we are ready to create a build definition that will implement the full BDT workflow. For this purpose we will use the LabDefaultTemplate.11.xaml that comes out of the box in TFS 2012. This build process template lets you take the output of another build and deploy it to each target machine. Since the deployment process will be running on the target server, you will have fewer problems with permissions and firewalls than if you were to deploy your solution remotely. So, before creating a BDT workflow build definition, make sure that you have an existing build definition that produces a release build of your application.
- Go to the Builds hub in Team Explorer and select New Build Definition.
- Give the build definition a meaningful name; here I called it MyApplication.Deploy.
- Set the trigger to Manual.
- Define a workspace for the build definition. Note that a BDT build doesn't really need a workspace, since all it does is launch another build definition and deploy the output of that build. But TFS doesn't allow you to save a build definition without adding at least one mapping.
- On Build Defaults, select the build controller.
- Since this build actually won't produce any output, you can select the "This build does not copy output files to a drop folder" option.
- On the Process tab, select the LabDefaultTemplate.11.xaml. This is usually located at $/TeamProject/BuildProcessTemplates/LabDefaultTemplate.11.xaml. To configure it, press the … button on the Lab Process Settings property.
- First, select the environment that you created before.
- Select which build you want to deploy and test. The "Select an existing build" option is very useful when developing the BDT workflow, because you do not have to run through the target build every time; instead, it will basically just run through the deployment and test steps, which speeds up the process. Here I have selected to queue a new build of the MyApplication.Test build definition.
- On the Deploy tab, you need to specify how the application should be installed on the target server. You can supply a list of deployment scripts with arguments that will be executed on the target server. In this example I execute the generated web deploy command file to deploy the solution. If, for example, you have databases, you can use sqlpackage.exe to deploy the database. If you are producing MSI installers in your build, you can run them using msiexec.exe, and so on. A good practice is to create a batch file that contains the entire deployment and that you can run both locally and on the target server. Then you would just execute the deployment batch file here in one single step.

The workflow defines some variables that are useful when running the deployments. These variables are:
- $(BuildLocation): the full path to where your build files are located
- $(InternalComputerName_<VM Name>): the computer name for a virtual machine in an SCVMM environment
- $(ComputerName_<VM Name>): the fully qualified domain name of the virtual machine

As you can see, I specify the path to the myapplication.deploy.cmd file using the $(BuildLocation) variable, which is the drop folder of the MyApplication.Test build. Note: the test agent account must have read permission in this drop location. You can find more information here on Building your Deployment Scripts.

On the last tab, we specify which tests to run after deployment. Here I select the test plan and the Automated Tests test suite that we saw before. Note that I also selected the automated test settings (called TargetServer in this case) that I have defined for my test plan. In there I define what data should be collected as part of the test run. For more information about test settings, see Specifying Test Settings for Microsoft Test Manager Tests.

We are done! Queue your BDT build and wait for it to finish. If the build succeeds, your build summary should show the build, deployment, and test runs all completing successfully.
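As referenced in step 6, here is a minimal skeleton of what a coded UI test class looks like. This is a hand-written sketch, not code from the post, and the application path and test name are placeholders:

    using Microsoft.VisualStudio.TestTools.UITesting;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [CodedUITest]
    public class AutomatedUITests
    {
        [TestMethod]
        public void MainWindow_ShouldOpen()   // placeholder test name
        {
            // launch the deployed application under test (path is a placeholder)
            var app = ApplicationUnderTest.Launch(@"C:\Deploy\MyApplication.exe");

            // recorded or hand-coded UI actions and assertions would go here;
            // typically these are generated into a UIMap by the Coded UI Test Builder
            Assert.IsNotNull(app);
        }
    }

It is the [CodedUITest] attribute plus the link made on the Associated Automation tab that lets the test agent pick this method up during the BDT run.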

    Read the article

  • Java JRE 1.6.0_65 Certified with Oracle E-Business Suite

    - by Steven Chan (Oracle Development)
The latest Java Runtime Environment 1.6.0_65 (a.k.a. JRE 6u65-b14) and later updates on the JRE 6 codeline are now certified with Oracle E-Business Suite Release 11i and 12 for Windows-based desktop clients.

Effects of new support dates on Java upgrades for EBS environments

Support dates for the E-Business Suite and Java have changed. Please review the sections below for more details:
- What does this mean for Oracle E-Business Suite users?
- Will EBS users be forced to upgrade to JRE 7 for Windows desktop clients?
- Will EBS users be forced to upgrade to JDK 7 for EBS application tier servers?

All JRE 6 and 7 releases are certified with EBS upon release

Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops from JRE 1.6.0_03 and later updates on the 1.6 codeline, and from JRE 7u10 and later updates on the JRE 7 codeline. We test all new JRE 1.6 and JRE 7 releases in parallel with the JRE development process, so all new JRE 1.6 and 7 releases are considered certified with the E-Business Suite on the same day that they're released by our Java team. You do not need to wait for a certification announcement before applying new JRE 1.6 or JRE 7 releases to your EBS users' desktops.

What's new in this Java release?

Java 6 is now available only via My Oracle Support for E-Business Suite users. You can find links to this release, including Release Notes, documentation, and the actual Java downloads here: All Java SE Downloads on MOS (Note 1439822.1)

32-bit and 64-bit versions certified

This certification includes both the 32-bit and 64-bit JRE versions.

32-bit JREs are certified on:
- Windows XP Service Pack 3 (SP3)
- Windows Vista Service Pack 1 (SP1) and Service Pack 2 (SP2)
- Windows 7 and Windows 7 Service Pack 1 (SP1)

64-bit JREs are certified only on 64-bit versions of Windows 7 and Windows 7 Service Pack 1 (SP1).

Worried about the 'mismanaged session cookie' issue?

No need to worry -- it's fixed. To recap: JRE releases 1.6.0_18 through 1.6.0_22 had issues with mismanaging session cookies that affected some users in some circumstances. The fix for those issues was first included in JRE 1.6.0_23, and it carries forward into all future JRE releases. In other words, if you wish to avoid the mismanaged session cookie issue, you should apply any release after JRE 1.6.0_22.

Implications of Java 6 End of Public Updates for EBS Users

The Support Roadmap for Oracle Java is published here: Oracle Java SE Support Roadmap. The latest updates to that page (as of Sept. 19, 2012) state (emphasis added):

Java SE 6 End of Public Updates Notice: After February 2013, Oracle will no longer post updates of Java SE 6 to its public download sites. Existing Java SE 6 downloads already posted as of February 2013 will remain accessible in the Java Archive on Oracle Technology Network. Developers and end-users are encouraged to update to more recent Java SE versions that remain available for public download. For enterprise customers, who need continued access to critical bug fixes and security fixes as well as general maintenance for Java SE 6 or older versions, long term support is available through Oracle Java SE Support.

What does this mean for Oracle E-Business Suite users?

EBS users fall under the category of "enterprise users" above.
Java is an integral part of the Oracle E-Business Suite technology stack, so EBS users will continue to receive Java SE 6 updates from February 2013 to the end of Java SE 6 Extended Support in June 2017. In other words, nothing changes for EBS users after February 2013. EBS users will continue to receive critical bug fixes and security fixes as well as general maintenance for Java SE 6 until the end of Java SE 6 Extended Support in June 2017.

How can EBS customers obtain Java 6 updates after the public end-of-life?

EBS customers can download Java 6 patches from My Oracle Support. For a complete list of all Java SE patch numbers, see: All Java SE Downloads on MOS (Note 1439822.1)

Will EBS users be forced to upgrade to JRE 7 for Windows desktop clients?

This upgrade is highly recommended but remains optional while Java 6 is covered by Extended Support. Updates will be delivered via My Oracle Support, where you can continue to receive critical bug fixes and security fixes as well as general maintenance for JRE 6 desktop clients. Java 6 is covered by Extended Support until June 2017. All E-Business Suite customers must upgrade to JRE 7 by June 2017.

Coexistence of JRE 6 and JRE 7 on Windows desktops

The upgrade to JRE 7 is highly recommended for EBS users, but some users may need to run both JRE 6 and 7 on their Windows desktops for reasons unrelated to the E-Business Suite. Most EBS configurations with IE and Firefox use non-static versioning by default. JRE 7 will be invoked instead of JRE 6 if both are installed on a Windows desktop. For more details, see "Appendix B: Static vs. Non-static Versioning and Set Up Options" in Notes 290807.1 and 393931.1.

Applying updates to JRE 6 and JRE 7 on Windows desktops

Auto-update will keep JRE 7 up-to-date for Windows users with JRE 7 installed. For Windows users with both JRE 6 and 7 installed, auto-update will only keep JRE 7 up-to-date. JRE 6 users are strongly encouraged to apply the latest Critical Patch Updates as soon as possible after each release. The Java SE CPUs will be available via My Oracle Support. EBS users can find more information about JRE 6 and 7 updates here: Information Center: Installation & Configuration for Oracle Java SE (Note 1412103.2). The dates for future Java SE CPUs can be found on the Critical Patch Updates, Security Alerts and Third Party Bulletin page. An RSS feed is available on that site for those who would like to be kept up-to-date.

What do Mac users need?

Mac users running Mac OS 10.7 or 10.8 can run JRE 7 plug-ins. See this article: EBS 12 certified with Mac OS X 10.7 and 10.8 with Safari 6 and JRE 7

Will EBS users be forced to upgrade to JDK 7 for EBS application tier servers?

JRE is used for desktop clients; JDK is used for application tier servers. JDK upgrades for E-Business Suite application tier servers are highly recommended but currently remain optional while Java 6 is covered by Extended Support. Updates will be delivered via My Oracle Support, where you can continue to receive critical bug fixes and security fixes as well as general maintenance for JDK 6 for application tier servers. Java SE 6 is covered by Extended Support until June 2017. All EBS customers with application tier servers on Windows, Solaris, and Linux must upgrade to JDK 7 by June 2017. EBS customers running their application tier servers on other operating systems should check with their respective vendors for the support dates for those platforms. JDK 7 is certified with E-Business Suite 12.
See: Java (JDK) 7 Certified for E-Business Suite 12 Servers

References
- Recommended Browsers for Oracle Applications 11i (Metalink Note 285218.1)
- Upgrading Sun JRE (Native Plug-in) with Oracle Applications 11i for Windows Clients (Metalink Note 290807.1)
- Recommended Browsers for Oracle Applications 12 (MetaLink Note 389422.1)
- Upgrading JRE Plugin with Oracle Applications R12 (MetaLink Note 393931.1)

Related Articles
- Mismanaged Session Cookie Issue Fixed for EBS in JRE 1.6.0_23
- Roundup: Oracle JInitiator 1.3 Desupported for EBS Customers in July 2009

    Read the article

  • C#/.NET Little Wonders: Tuples and Tuple Factory Methods

    - by James Michael Hare
Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can really help improve your code by making it easier to write and maintain. This week, we look at the System.Tuple class and the handy factory methods for creating a Tuple by inferring the types.

What is a Tuple?

The System.Tuple is a class that tends to inspire a reaction in one of two ways: love or hate. Simply put, a Tuple is a data structure that holds a specific number of items of a specific type in a specific order. That is, a Tuple<int, string, int> is a tuple that contains exactly three items: an int, followed by a string, followed by an int. The sequence is important not only to distinguish between two members of the tuple with the same type, but also for comparisons between tuples.

Some people tend to love tuples because they give you a quick way to combine multiple values into one result. This can be handy for returning more than one value from a method (without using out or ref parameters), or for creating a compound key for a Dictionary, or any other purpose you can think of. They can be especially handy when passing a series of items into a call that only takes one object parameter, such as passing an argument to a thread's startup routine. In these cases, you do not need to define a class; simply create a tuple containing the types you wish to return, and you are ready to go!

On the other hand, there are some people who see tuples as a crutch in object-oriented design. They may view the tuple as a very watered-down class with very little inherent semantic meaning. As an example, what if you saw this in a piece of code:

    var x = new Tuple<int, int>(2, 5);

What are the contents of this tuple? If the tuple isn't named appropriately, and if the contents of each member are not self-evident from the type, this can be a confusing question. The people who tend to be against tuples would rather you explicitly code a class to contain the values, such as:

    public sealed class RetrySettings
    {
        public int TimeoutSeconds { get; set; }
        public int MaxRetries { get; set; }
    }

Here, the meaning of each int in the class is much clearer, but it's a bit more work to create the class, and it can clutter a solution with extra classes.

So, what's the correct way to go? That's a tough call. You will have people who will argue quite well for one or the other. For me, I consider the Tuple to be a tool that makes it easy to collect values together quickly. There are times when I just need to combine items for a key or a result, in which case the tuple is short-lived, so the meaning isn't easily lost, and I feel this is a good compromise. If the scope of the collection of items, though, is more application-wide, I tend to favor creating a full class.

Finally, it should be noted that tuples are immutable. That means they are assigned a value at construction, and that value cannot be changed. Now, of course, if the tuple contains an item of a reference type, this means that the reference is immutable and not the item referred to.

Tuples from 1 to N

Tuples come in all sizes: you can have as few as one element in your tuple, or as many as you like. However, since C# generics can't have an infinite generic type parameter list, any items after 7 have to be collapsed into another tuple, as we'll show shortly.
So when you declare your tuple from sizes 1 (a 1-tuple or singleton) to 7 (a 7-tuple or septuple), simply include the appropriate number of type arguments: 1: // a singleton tuple of integer 2: Tuple<int> x; 3:  4: // or more 5: Tuple<int, double> y; 6:  7: // up to seven 8: Tuple<int, double, char, double, int, string, uint> z; Anything eight and above, and we have to nest tuples inside of tuples.  The last element of the 8-tuple is the generic type parameter Rest; this is special in that the Tuple checks at runtime to make sure that the type is a Tuple.  This means that a simple 8-tuple must nest a singleton tuple (one of the good uses for a singleton tuple, by the way) for the Rest property. 1: // an 8-tuple 2: Tuple<int, int, int, int, int, double, char, Tuple<string>> t8; 3:  4: // a 9-tuple 5: Tuple<int, int, int, int, double, int, char, Tuple<string, DateTime>> t9; 6:  7: // a 16-tuple 8: Tuple<int, int, int, int, int, int, int, Tuple<int, int, int, int, int, int, int, Tuple<int,int>>> t16; Notice that on the 16-tuple we had to have a nested tuple in the nested tuple.  Since the tuple can only support up to seven items, and then a rest element, that means that if the nested tuple needs more than seven items you must nest in it as well.  Constructing tuples Constructing tuples is just as straightforward as declaring them.  That said, you have two distinct ways to do it.  The first is to construct the tuple explicitly yourself: 1: var t3 = new Tuple<int, string, double>(1, "Hello", 3.1415927); This creates a triple that has an int, string, and double and assigns the values 1, "Hello", and 3.1415927 respectively.  Make sure the order of the arguments supplied matches the order of the types!  Also notice that we can't half-assign a tuple or create a default tuple.  Tuples are immutable (you can't change the values once constructed), so you must provide all values at construction time. Another way to easily create tuples is to do it implicitly using the System.Tuple static class's Create() factory methods.  These methods (much like C++'s std::make_pair function) will infer the types from the method call so you don't have to type them in.  This can dramatically reduce the amount of typing required, especially for complex tuples! 1: // this 4-tuple is typed Tuple<int, double, string, char> 2: var t4 = Tuple.Create(42, 3.1415927, "Love", 'X'); Notice how much easier it is to use the factory methods and infer the types?  This can cut down on typing quite a bit when constructing tuples.  The Create() factory method can construct from a 1-tuple (singleton) to an 8-tuple (octuple), which of course will be an octuple where the last item is a singleton, as we described before in nested tuples. Accessing tuple members Accessing a tuple's members is simplicity itself… mostly.  The properties for accessing up to the first seven items are Item1, Item2, …, Item7.  If you have an octuple or beyond, the final property is Rest, which will give you the nested tuple, which you can then access in a similar manner.  Once again, keep in mind that these are read-only properties and cannot be changed. 
1: // for septuples and below, use the Item properties 2: var t1 = Tuple.Create(42, 3.14); 3:  4: Console.WriteLine("First item is {0} and second is {1}", 5: t1.Item1, t1.Item2); 6:  7: // for octuples and above, use Rest to retrieve nested tuple 8: var t9 = new Tuple<int, int, int, int, int, int, int, 9: Tuple<int, int>>(1,2,3,4,5,6,7,Tuple.Create(8,9)); 10:  11: Console.WriteLine("The 8th item is {0}", t9.Rest.Item1); Tuples are IStructuralComparable and IStructuralEquatable Most of you know about IComparable and IEquatable; what you may not know is that there are two sister interfaces to these that were added in .NET 4.0 to help support tuples.  The IStructuralComparable and IStructuralEquatable interfaces make it easy to compare two tuples for equality and ordering.  This is invaluable for sorting, and makes it easy to use tuples as a compound key to a dictionary (one of my favorite uses)! Why is this so important?  Remember when we said that some folks think tuples are too generic and you should define a custom class?  This is all well and good, but if you want to design a custom class that can automatically order itself based on its members and build a hash code for itself based on its members, it is no longer a trivial task!  Thankfully the tuple does this all for you through the explicit implementations of these interfaces. For equality, two tuples are equal if all elements are equal between the two tuples, that is if t1.Item1 == t2.Item1 and t1.Item2 == t2.Item2, and so on.  For ordering, it's a little more complex in that it compares the two tuples one item at a time starting at Item1, and sees which one has the smaller Item1.  If one has a smaller Item1, it is the smaller tuple.  However, if both Item1 are the same, it compares Item2 and so on. For example: 1: var t1 = Tuple.Create(1, 3.14, "Hi"); 2: var t2 = Tuple.Create(1, 3.14, "Hi"); 3: var t3 = Tuple.Create(2, 2.72, "Bye"); 4:  5: // true, t1 == t2 because all items are == 6: Console.WriteLine("t1 == t2 : " + t1.Equals(t2)); 7:  8: // false, t2 != t3 because at least one item is different 9: Console.WriteLine("t2 == t3 : " + t2.Equals(t3)); The actual implementation of IComparable, IEquatable, IStructuralComparable, and IStructuralEquatable is explicit, so if you want to invoke the methods defined there you'll have to manually cast to the appropriate interface: 1: // true because t1.Item1 < t3.Item1; had they been the same it would check Item2 and so on 2: Console.WriteLine("t1 < t3 : " + (((IComparable)t1).CompareTo(t3) < 0)); So, as I mentioned, the fact that tuples are automatically equatable and comparable (provided the types you use define equality and comparability as needed) means that we can use tuples for compound keys in hashing and ordering containers like Dictionary and SortedList: 1: var tupleDict = new Dictionary<Tuple<int, double, string>, string>(); 2:  3: tupleDict.Add(t1, "First tuple"); 4: tupleDict.Add(t3, "Second tuple"); 5:  6: // note: adding t2 as well would throw an ArgumentException, since t2 equals t1 and keys must be unique Because Tuple's IStructuralEquatable implementation builds the tuple's hash code by combining the hash codes of the members, this makes using the tuple as a complex key quite easy!  
For example, let's say you are creating account charts for a financial application, and you want to cache those charts in a Dictionary based on the account number and the number of days of chart data (for example, a 1-day chart, a 1-week chart, etc.): 1: // the account number (string) and number of days (int) are the key to the cached chart 2: var chartCache = new Dictionary<Tuple<string, int>, IChart>(); A short sketch of this cache pattern follows below. Summary The System.Tuple, like any tool, is best used where it will achieve a greater benefit.  I wouldn't advise overusing them on objects with a large scope, where they can become difficult to maintain.  However, when used properly in a well-defined scope, they can make your code cleaner and easier to maintain by removing the need for extraneous POCOs and custom property hashing and ordering. They are especially useful in defining compound keys to IDictionary implementations and for returning multiple values from methods, or passing multiple values to a single object parameter. Tweet Technorati Tags: C#,.NET,Tuple,Little Wonders
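Building on that chart-cache idea, here is a minimal hedged sketch of the lookup pattern. The Dictionary and IChart come from the example above; the RenderChart helper and the sample key values are hypothetical:

var chartCache = new Dictionary<Tuple<string, int>, IChart>();

// compound key: account number plus days of chart data (invented sample values)
var key = Tuple.Create("ACCT-1001", 7);

IChart chart;
if (!chartCache.TryGetValue(key, out chart))
{
    // cache miss: build the chart once and store it under the compound key
    chart = RenderChart("ACCT-1001", 7);  // RenderChart is a hypothetical helper
    chartCache[key] = chart;
}

Because the tuple key is hashed and compared structurally, two separately created Tuple.Create("ACCT-1001", 7) keys resolve to the same cache entry.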

    Read the article

  • How do I stop XNA/Visual Studio from rebuilding my content project every time I build?

    - by Phil Quinn
    My group and I are working on a game in XNA 4.0 with Visual Studio 2010/2012. The main solution has 6 projects: 2 XNA game projects (1 executable/ 1 class library), 1 WPF executable for the level editor, 2 standard class libraries, and a content project. Originally, the editor and engine XNA game projects had a content reference to separate content projects. Recently, I consolidated the content projects into one to simplify asset additions. Since pushing these changes to our git repo, certain members of my group have been experiencing weird build issues. Every time they run the project, they have to re-build all of the assets. This happens regardless of whether any changes were made, even if they just run the project directly after building. I've taken a few steps to figure out why this is happening. Below is the MSBuild output set on Normal verbosity. The seemingly important part is at 4, with the line 4> Rebuilding all content because build settings have changed 1>------ Build started: Project: Engine.Core, Configuration: Debug x86 ------ 1>Build started 11/29/2012 3:24:24 AM. 1>ResolveAssemblyReferences: 1> A TargetFramework profile exclusion list will be generated. 1>EmbedXnaFrameworkRuntimeProfile: 1>Skipping target "EmbedXnaFrameworkRuntimeProfile" because all output files are up-to-date with respect to the input files. 1>GenerateTargetFrameworkMonikerAttribute: 1>Skipping target "GenerateTargetFrameworkMonikerAttribute" because all output files are up-to-date with respect to the input files. 1>CoreCompile: 1>Skipping target "CoreCompile" because all output files are up-to-date with respect to the input files. 1>XnaWriteCacheFile: 1>Skipping target "XnaWriteCacheFile" because all output files are up-to-date with respect to the input files. 1>_CopyOutOfDateSourceItemsToOutputDirectoryAlways: 1> Copying file from "<solution-dir>\src\Engine.Core\DialoguePrototypeTestDB.s3db" to "bin\x86\Debug\DialoguePrototypeTestDB.s3db". 1>_CopyAppConfigFile: 1>Skipping target "_CopyAppConfigFile" because all output files are up-to-date with respect to the input files. 1>CopyFilesToOutputDirectory: 1> Engine.Core -> <solution-dir>\src\Engine.Core\bin\x86\Debug\TimeSink.Engine.Core.dll 1> 1>Build succeeded. 1> 1>Time Elapsed 00:00:00.13 2>------ Build started: Project: TimeSink.Entities, Configuration: Debug x86 ------ 2>Build started 11/29/2012 3:24:25 AM. 2>ResolveAssemblyReferences: 2> A TargetFramework profile exclusion list will be generated. 2>EmbedXnaFrameworkRuntimeProfile: 2>Skipping target "EmbedXnaFrameworkRuntimeProfile" because all output files are up-to-date with respect to the input files. 2>GenerateTargetFrameworkMonikerAttribute: 2>Skipping target "GenerateTargetFrameworkMonikerAttribute" because all output files are up-to-date with respect to the input files. 2>CoreCompile: 2>Skipping target "CoreCompile" because all output files are up-to-date with respect to the input files. 2>XnaWriteCacheFile: 2>Skipping target "XnaWriteCacheFile" because all output files are up-to-date with respect to the input files. 2>_CopyOutOfDateSourceItemsToOutputDirectoryAlways: 2> Copying file from "<solution-dir>\src\Engine.Core\DialoguePrototypeTestDB.s3db" to "bin\x86\Debug\DialoguePrototypeTestDB.s3db". 2>CopyFilesToOutputDirectory: 2> TimeSink.Entities -> <solution-dir>\src\TimeSink.Entities\bin\x86\Debug\TimeSink.Entities.dll 2> 2>Build succeeded. 
2> 2>Time Elapsed 00:00:00.11 3>------ Build started: Project: Editor (Editor\Editor), Configuration: Debug x86 ------ 4>------ Build started: Project: Engine.Game, Configuration: Debug x86 ------ 3>Build started 11/29/2012 3:24:25 AM. 3>CoreCompile: 3> All content is already up to date 3>ResolveAssemblyReferences: 3> A TargetFramework profile exclusion list will be generated. 3>EmbedXnaFrameworkRuntimeProfile: 3>Skipping target "EmbedXnaFrameworkRuntimeProfile" because all output files are up-to-date with respect to the input files. 3>GenerateTargetFrameworkMonikerAttribute: 3>Skipping target "GenerateTargetFrameworkMonikerAttribute" because all output files are up-to-date with respect to the input files. 3>CoreCompile: 3>Skipping target "CoreCompile" because all output files are up-to-date with respect to the input files. 3>XnaWriteCacheFile: 3>Skipping target "XnaWriteCacheFile" because all output files are up-to-date with respect to the input files. 3>_CopyOutOfDateSourceItemsToOutputDirectoryAlways: 3> Copying file from "<solution-dir>\src\Engine.Core\DialoguePrototypeTestDB.s3db" to "bin\x86\Debug\DialoguePrototypeTestDB.s3db". 3>_CopyOutOfDateNestedContentItemsToOutputDirectory: 3>Skipping target "_CopyOutOfDateNestedContentItemsToOutputDirectory" because all output files are up-to-date with respect to the input files. 3>CopyFilesToOutputDirectory: 3> Editor -> <solution-dir>\src\Editor\Editor\bin\x86\Debug\Editor.dll 3> 3>Build succeeded. 3> 3>Time Elapsed 00:00:00.39 4>Build started 11/29/2012 3:24:25 AM. 4>CoreCompile: 4> Rebuilding all content because build settings have changed 4> Building Textures\circle.png -> <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\Textures\circle.xnb 4> Importing Textures\circle.png with Microsoft.Xna.Framework.Content.Pipeline.TextureImporter 4> Processing Textures\circle.png with Microsoft.Xna.Framework.Content.Pipeline.Processors.TextureProcessor 4> Compiling <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\Textures\circle.xnb 4> Building Textures\giroux.png -> <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\Textures\giroux.xnb 4> Importing Textures\giroux.png with Microsoft.Xna.Framework.Content.Pipeline.TextureImporter 4> Processing Textures\giroux.png with Microsoft.Xna.Framework.Content.Pipeline.Processors.TextureProcessor 4> Compiling <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\Textures\giroux.xnb 4> Building Textures\Body_Neutral.png -> <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\Textures\Body_Neutral.xnb 4> Importing Textures\Body_Neutral.png with Microsoft.Xna.Framework.Content.Pipeline.TextureImporter 4> Processing Textures\Body_Neutral.png with Microsoft.Xna.Framework.Content.Pipeline.Processors.TextureProcessor 4> Compiling <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\Textures\Body_Neutral.xnb 4> Building font.spritefont -> <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\font.xnb 4> Importing font.spritefont with Microsoft.Xna.Framework.Content.Pipeline.FontDescriptionImporter 4> Processing font.spritefont with Microsoft.Xna.Framework.Content.Pipeline.Processors.FontDescriptionProcessor 4> Compiling <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\font.xnb 4>ResolveAssemblyReferences: 4> A TargetFramework profile exclusion list will be generated. 
4>EmbedXnaFrameworkRuntimeProfile: 4>Skipping target "EmbedXnaFrameworkRuntimeProfile" because all output files are up-to-date with respect to the input files. 4>GenerateTargetFrameworkMonikerAttribute: 4>Skipping target "GenerateTargetFrameworkMonikerAttribute" because all output files are up-to-date with respect to the input files. 4>CoreCompile: 4>Skipping target "CoreCompile" because all output files are up-to-date with respect to the input files. 4>_CopyOutOfDateSourceItemsToOutputDirectoryAlways: 4> Copying file from "<solution-dir>\src\Engine.Core\DialoguePrototypeTestDB.s3db" to "bin\x86\Debug\DialoguePrototypeTestDB.s3db". 4>_CopyOutOfDateNestedContentItemsToOutputDirectory: 4>Skipping target "_CopyOutOfDateNestedContentItemsToOutputDirectory" because all output files are up-to-date with respect to the input files. 4>_CopyAppConfigFile: 4>Skipping target "_CopyAppConfigFile" because all output files are up-to-date with respect to the input files. 4>CopyFilesToOutputDirectory: 4> Engine.Game -> <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Engine.Game.exe 4>IncrementalClean: 4> Deleting file "<solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\circle.xnb". 4> Deleting file "<solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\giroux.xnb". 4> Deleting file "<solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Body_Neutral.xnb". 4> Deleting file "<solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\font.xnb". 4> 4>Build succeeded. 4> 4>Time Elapsed 00:00:01.72 ========== Build: 4 succeeded, 0 failed, 1 up-to-date, 0 skipped ========== I can't think of how build settings could change between consecutive executions. Like I said, this only happens for half our group. One member is on a 32-bit Windows 7 Prof bootcamp partition on a Mac. Everyone else, including those who don't have the issue, is running straight 64-bit Windows 7 Prof. Both groups have tried using VS 2010 and VS 2012. Any insight would be greatly appreciated. Also, I can post more details upon request if this isn't thorough enough.
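One hedged next step for pinning down which "build settings" MSBuild thinks have changed is to re-run the build with diagnostic logging, which dumps the full property and item state for each target so the output from an affected and an unaffected machine can be diffed; the solution file name below is a placeholder:

msbuild MySolution.sln /verbosity:diagnostic > build.log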

    Read the article

  • CodePlex Daily Summary for Sunday, December 05, 2010

CodePlex Daily Summary for Sunday, December 05, 2010
Popular Releases
SubtitleTools: SubtitleTools 1.0: First public release.
MiniTwitter: 1.62: MiniTwitter 1.62.
Phalanger - The PHP Language Compiler for the .NET Framework: 2.0 (December 2010): The release is targeted for stable daily use. With improved performance and enhanced compatibility with several of the latest PHP open source applications, this release is a perfect replacement for your old PHP runtime. Changes made within this release include the following and much more: performance improvements based on real-world application experience (we determined the biggest bottlenecks and found and removed overheads causing performance problems in many PHP applications); reimplemented nat...
Chronos WPF: Chronos v2.0 Beta 3: Release notes: updated introduction document; updated Visual Studio 2010 Extension (vsix) package; added horizontal scrolling to the main window TaskBar; added new styles for ListView, ListViewItem, GridViewColumnHeader, ...; added a new WindowViewModel class (allowing to fetch data); added a new Navigate method (with several overloads) to the NavigationViewModel class (protected); reimplemented Task usage for the WorkspaceViewModel.OnDelete method; removed the reflection effect...
MDownloader: MDownloader-0.15.26.7024: Fixed updater; fixed Megaupload.
DJ - jQuery WebControls for ASP.NET: DJ 1.2: What is new? Updated to support jQuery 1.4.2 and jQuery UI 1.8.6; updated to Visual Studio 2010; new WebControls with samples added: Autocomplete WebControl, Button WebControl, ToggleButt WebControl. An example web site is included in the source code project.
LateBindingApi.Excel: LateBindingApi.Excel Release 0.7g: Differences from the previous version: additional Interior properties; Group/Ungroup methods for Range; bug fix in COM reference handling for the Application object in some classes. Release+Samples V0.7g contains the runtime DLL and sample projects. Sample projects: COMAddinExample - demonstrates a version-independent COM add-in; Example01 - background colors and borders for cells; Example02 - font attributes and alignment for cells; Example03 - number formats; Example04 - Shapes, WordArts, P...
ESRI ArcGIS Silverlight Toolkit: November 2010 - v2.1: ESRI ArcGIS Silverlight Toolkit v2.1. Added Windows Phone 7 build. New controls added: InfoWindow, ChildPage (Windows Phone 7 only). See what's new, with full details, here: http://help.arcgis.com/en/webapi/silverlight/help/#/What_s_new_in_2_1/016600000025000000/ Note: requires Visual Studio 2010, .NET 4.0 and Silverlight 4.0.
ASP .NET MVC CMS (Content Management System): Atomic CMS 2.1.1: Atomic CMS 2.1.1 release notes; Atomic CMS installation guide.
Winware: Winware 3.0 (.Net 4.0): Winware 3.0 is based on .Net 4.0 with C#. Please open it with Visual Studio 2010. This release contains a lab web application.
UltimateJB: UltimateJB 2.02 PL3 KAKAROTO + CE-X-3.41 EvilSperm: Here is a release many have been waiting for impatiently: the CEX341 version, for playing games that require firmware 3.50 (some simply do not work otherwise). For now, CEX341 is only available for PS3s on firmware 3.41! The PL3 KAKAROTO version integrates its latest modifications and now includes firmware 3.30!
Conclusion: UltimateJB CEX341 spoofs firmware 3.41 as 3.50 (eases the use of certain games with openManage...
Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.6.5 beta Released: Hi, today we are releasing Visifire 3.6.5 beta with the following new feature: a new property, AutoFitToPlotArea, has been introduced in DataSeries. AutoFitToPlotArea will bring bubbles inside the PlotArea in order to avoid clipping of bubbles in a bubble chart. This release also includes a few bug fixes: AxisX labels were getting clipped if an angle was set for AxisLabels and ScrollingEnabled was not set on the Chart; if the LabelStyle property was set to 'Inside', the size of the Pie was not proper. Yo...
EnhSim: EnhSim 2.1.1: This release adds in the changes for 4.03a. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Switched Searing Flames bac...
AI: Initial 0.0.1: It's simply one code file; it simulates an AI and a machine in a simulated world. The AI has a little understanding of its body machine and parts, and is able to use its feet to do actions, just starting and stopping walking. The world is all white, with nothing but the machine on a white planet. Colors, odors, and position information make no sense. I'm a former C# programmer and I'm learning F# during this project; although I'm still not a good F# programmer, in this project I am learning to prog...
NKinect: NKinect Preview: Build features: accelerometer reading; motor serial number property; realtime image update; realtime depth calculation; export to PLY (on demand); control motor LED; control Kinect tilt.
Microsoft - Domain Oriented N-Layered .NET 4.0 App Sample (Microsoft Spain): V1.0 - N-Layer DDD Sample App .NET 4.0: Required software (Microsoft base software needed for the development environment): Visual Studio 2010 RTM & .NET 4.0 RTM (final versions); Expression Blend 4; SQL Server 2008 R2 Express/Standard/Enterprise; Unity Application Block 2.0 - published May 5th 2010 http://www.microsoft.com/downloads/en/details.aspx?FamilyID=2D24F179-E0A6-49D7-89C4-5B67D939F91B&displaylang=en http://unity.codeplex.com/releases/view/31277 PEX & MOLES 0.94.51023.0, 29/Oct/2010 - Visual Studio 2010 Power Tools http://re...
Sense/Net Enterprise Portal & ECMS: SenseNet 6.0.1 Community Edition: Sense/Net 6.0.1 Community Edition. This half year we have been working quite fiercely to bring you the long-awaited release of Sense/Net 6.0. Download this Community Edition to see what we have been up to. These months we have worked on getting the WebCMS capabilities of Sense/Net 6.0 up to par. New features include: new, powerful page and portlet editing experience; HTML and CSS cleanup; new, powerful site skinning system; upgraded, lightning-fast indexing and query via Lucene. Limita...
Minecraft GPS: Minecraft GPS 1.1.1: New features: Compass! New style. Set opacity on main window to allow overlay of Minecraft. Open World in any folder. Fixes: fixed style so the listbox won't grow the window size; fixed an open file dialog issue on non-Vista kernel machines.
DotSpatial: DotSpatial 11-28-2010: This release introduces some exciting improvements. Support for big rasters, both in display and in changing the scheme.
Faster raster scheme creation for all rasters. Caching of the "sample" values, so once obtained, the raster symbolizer dialog loads faster. Reprojection supported for raster and image classes. Affine transform fully supported for images and rasters, so skewed images are now possible. Projection uses better checks when loading unprojected layers. GDAL raster support f...
SuperWebSocket: SuperWebSocket(60438): It is the first release of SuperWebSocket. Because it is based on SuperSocket, most features of SuperSocket are supported in SuperWebSocket. The source code includes a LiveChat demo.
New Projects
Bambook
Beespot: Beespot is an easy to use, secure, robust and powerful honeypot for the SSH service, written in Python.
caitanzhangDemo: this is my demo
ColorPicker [SA:MP]: ColorPicker [SA:MP] is a simple tool that generates: PAWN hex color codes (useful for SA:MP scripts); ARGB color codes; HTML color codes. It's developed in C#.
Conversions-n-Stuff: Conversions-n-Stuff (CNS) is a program focused on making it easier for anyone to convert from one measurement to another. There is no need to know the calculations and formulas! Just fill in the forms and click, and you have your answer! CNS leverages C#, WPF, and Silverlight.
dotP2P: dotP2P would consist of servers running caches to keep track of domain and nameserver records. Cache servers can be created with any server that supports XML-RPC or SOAP. MySQL is used to store the cache data.
EmailMasterTemplate: An email template engine based on master and child user controls.
Ezekiel: Ezekiel is a Windows application that leverages a user's existing BusinessObjects reports to provide a custom read-only front end for a database. It's developed using Visual C# 2010 Express.
F# Colorizer Editor: Standalone colorizer editor for Brian's F# Deep Colorizer VS extension.
GameCore: Core engine for game services for mobile and RIA clients.
Gestão de contas bancárias: Final project for the Applied Mathematics course at UATLA.
GPP: GPP
kmean: K-means clustering.
PHP ORM
Steampunk Odyssey: Steampunk Odyssey is a side-scrolling action game based on the XNA platform.
SubtitleTools: SubtitleTools is a small utility that helps modify existing subtitles or download new ones based on the digital signatures of your movie files from the opensubtitles.org site.
Windows Phone 7 Accelerometer: Accelerometer for Windows Phone 7.

    Read the article

  • SQL SERVER – Data Sources and Data Sets in Reporting Services SSRS

    - by Pinal Dave
This example is from Beginning SSRS by Kathi Kellenberger. Supporting files are available with a free download from the www.Joes2Pros.com web site. Connecting to Your Data: When I was a child, the telephone book was an important part of my life. Maybe I was just a nerd, but I enjoyed getting a new book every year to page through to learn about the businesses in my small town or to discover where some of my school acquaintances lived. It was also the source of maps to my town's neighborhoods and the towns that surrounded me. To make a phone call, I would need a telephone number. In order to find a telephone number, I had to know how to use the telephone book. That seems pretty simple, but it resembles connecting to any data. You have to know where the data is and how to interact with it. A data source is the connection information that the report uses to connect to the database. You have two choices when creating a data source: whether to embed it in the report or to make it a shared resource usable by many reports. Data Sources and Data Sets: A few basic terms will make the upcoming choices make more sense. What database on what server do you want to connect to? It would be better to just ask… "what is your data source?" The connection you need to make to get your report's data is called a data source. If you connect to a data source (like the JProCo database), there may be hundreds of tables. You probably only want data from just a few tables, which means you want to write a specific query against this data source. A query on a data source to get just the records you need for an SSRS report is called a Data Set. Creating a Local Data Source: You can embed a connection from your report directly to your JProCo database, which (let's say) is installed on a server named Reno. If you move JProCo to a new server named Tampa, then you need to update the Data Source. If you have 10 reports in one project that were all pointing to the JProCo database on the Reno server, then they would all need to be updated one by one. It's possible to make a project-level Data Source and have each report use that. This means one change can fix all 10 reports at once. This would be called a Shared Data Source. Creating a Shared Data Source: The best advice I can give you is to create shared data sources. The reason I recommend this is that if a database moves to a new server, you will have just one place in Report Manager to make the server name change. That one change will update the connection information in all the reports that use that data source. To get started, you will start with a fresh project. Go to Start > All Programs > SQL Server 2012 > Microsoft SQL Server Data Tools to launch SSDT. Once SSDT is running, click New Project to create a new project. Once the New Project dialog box appears, fill in the form as shown in the figure. Be sure to select Report Server Project this time – not the wizard. Click OK to dismiss the New Project dialog box. You should now have an empty project, as shown in the Solution Explorer. A report is meant to show you data. Where is the data? The first task is to create a Shared Data Source. Right-click on the Shared Data Sources folder and choose Add New Data Source. The Shared Data Source Properties dialog box will launch, where you can fill in a name for the data source. By default, it is named DataSource1. 
The best practice is to give the data source a more meaningful name. It is possible that you will have projects with more than one data source and, by naming them, you can tell one from another. Type the name JProCo for the data source name and click the Edit button to configure the database connection properties. If you take a look at the types of data sources you can choose, you will see that SSRS works with many data platforms including Oracle, XML, and Teradata. Make sure SQL Server is selected before continuing. For this post, I am assuming that you are using a local SQL Server and that you can use your Windows account to log in to the SQL Server. If, for some reason, you must use SQL Server Authentication, choose that option and fill in your SQL Server account credentials. Otherwise, just accept Windows Authentication. If your database server was installed locally and with the default instance, just type in Localhost for the Server name. Select the JProCo database from the database list. At this point, the connection properties should look like the figure. If you have installed a named instance of SQL Server, you will have to specify the server name like this: Localhost\InstanceName, replacing InstanceName with whatever your instance name is (see the sample connection strings below). If you are not sure about the named instance, launch the SQL Server Configuration Manager found at Start > All Programs > Microsoft SQL Server 2012 > Configuration Tools. If you have a named instance, the name will be shown in parentheses. A default instance of SQL Server will display MSSQLSERVER; a named instance will display the name chosen during installation. Once you get the connection properties filled in, click OK to dismiss the Connection Properties dialog box and OK again to dismiss the Shared Data Source properties. You now have a data source in the Solution Explorer. What's Next: I really need to thank Kathi Kellenberger and Rick Morelan for sharing this material for this 5-day series of posts on SSRS. To get really comfortable with SSRS you will get to know the different SSDT windows, build reports on your own (without the wizards), add report headers and footers, accept user input, and create levels, charts, or even maps for visual appeal. You might be surprised to know that a small 230-page book starts from the very beginning and covers the steps to do all these items. Beginning SSRS 2012 is a small, easy-to-follow book, so you can learn SSRS for less than $20. See Joes2Pros.com for more on this and other books. If you want to learn SSRS in simple words – I strongly recommend you get the Beginning SSRS book from Joes 2 Pros. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Reporting Services, SSRS
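For reference, the connection the dialog builds boils down to a standard SQL Server connection string. A hedged sketch of the two server-name forms discussed above, using the article's JProCo database and Windows Authentication (exact keys can vary by provider):

Data Source=Localhost;Initial Catalog=JProCo;Integrated Security=True              (default instance)
Data Source=Localhost\InstanceName;Initial Catalog=JProCo;Integrated Security=True (named instance)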

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #032

    - by Pinal Dave
Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and have listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane. 2007 Complete Series of Database Coding Standards and Guidelines SQL SERVER Database Coding Standards and Guidelines – Introduction SQL SERVER – Database Coding Standards and Guidelines – Part 1 SQL SERVER – Database Coding Standards and Guidelines – Part 2 SQL SERVER Database Coding Standards and Guidelines Complete List Download Explanation and Example – SELF JOIN Sometimes all of the data you require is contained within a single table, but the data you need to extract is related to other data in the same table. Examples of this type of data relate to Employee information, where the table may have both an Employee's ID number for each record and also a field that displays the ID number of an Employee's supervisor or manager. To retrieve the data, the table is required to relate/join to itself. Insert Multiple Records Using One Insert Statement – Use of UNION ALL This is a very interesting question I received from a new developer: how can I insert multiple values into a table using only one insert? When multiple records are to be inserted into the table, the following is the common way to do it using T-SQL (see the short example below). Function to Display Current Week Date and Day – Weekly Calendar A straight blog post with a script to find the current week's date and day based on the parameters passed into the function.  2008 In my beginning years, I had almost the same confusion as many developers have in their earlier years. Here are two of the interesting questions which I attempted to answer in my early years. Even if you are an experienced developer, you may still like to read the following two questions: Order Of Column In Index Order of Conditions in WHERE Clauses Example of DISTINCT in Aggregate Functions Have you ever used DISTINCT with an aggregate function? Here is a simple example of how users can do it. Create a Comma Delimited List Using SELECT Clause From Table Column A straight-to-script example where I explain how to do something easily and quickly. Compound Assignment Operators SQL SERVER 2008 introduced the new concept of compound assignment operators, which have been available in many other programming languages for quite some time. A compound assignment operator is an operator where a variable is operated upon and assigned on the same line. PIVOT and UNPIVOT Table Examples Here is a very interesting question – the answer can be both YES and NO: "If we PIVOT any table and UNPIVOT that table, do we get our original table?" Read the blog post to get the explanation of the question above. 2009 What is Interim Table – Simple Definition of Interim Table The interim table is a table that is generated by joining two tables and is not the final result table. In other words, when two tables are joined they create an interim table as the resultset, but the resultset is not final yet. It may be possible that more tables are about to be joined onto the interim table, and more operations are still to be applied to that table (e.g. Order By, Having, etc.). Besides, it may be possible that there is no interim table; sometimes the final table is what is generated when the query is run. 
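As a hedged illustration of that UNION ALL insert trick from the 2007 list, with an invented table and values (on SQL Server 2008 and later, a VALUES row constructor does the same job):

-- one INSERT statement, multiple rows via UNION ALL (hypothetical table and values)
INSERT INTO dbo.Employee (EmployeeID, EmployeeName)
SELECT 1, 'Alice' UNION ALL
SELECT 2, 'Bob' UNION ALL
SELECT 3, 'Carol';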
2010 Stored Procedure and Transactions If a Stored Procedure were inherently transactional, it would roll back the complete transaction when it encounters any error. Well, that does not happen in this case, which proves that a Stored Procedure does not, by itself, provide a transactional wrapper for a batch of T-SQL. Generate Database Script for SQL Azure When talking about SQL Azure, the most common complaint I hear is that a script generated from a stand-alone SQL Server database is not compatible with SQL Azure. This was true for some time, but not any more. If you have SQL Server 2008 R2 installed, you can follow the guideline below to generate a script which is compatible with SQL Azure. Convert IN to EXISTS – Performance Talk It is NOT necessarily true that every time IN is replaced by EXISTS it gives better performance. However, in our case listed above it certainly does. You can read about this subject in the associated blog post. Subquery or Join – Various Options – SQL Server Engine Knows the Best Every time there is a performance tuning exercise, I hear the conversation among developers where some prefer a subquery and some prefer a join. In this two-part blog post, I explain the same in detail with examples. Part 1 | Part 2 Merge Operations – Insert, Update, Delete in Single Execution MERGE is a new feature that provides an efficient way to do multiple DML operations. In earlier versions of SQL Server, we had to write separate statements to INSERT, UPDATE, or DELETE data based on certain conditions; however, at present, by using the MERGE statement, we can include the logic of such data changes in one statement that checks whether the data is matched, updating it when it is and inserting it when it is unmatched (a short MERGE sketch follows at the end of this excerpt). 2011 Puzzle – Statistics are not updated but are Created Once Here is the quick scenario of my setup: create a table; insert 1,000 records; check the statistics; now insert 10 times more – 10,000 records; check the statistics – they will NOT be updated – WHY? Question to You – When to use Function and When to use Stored Procedure Personally, I believe that they are different things – they cannot be compared. I can say it would be like comparing apples and oranges. Each has its own unique use. However, they can be used interchangeably many times in real life (i.e., in production environments). I have personally seen both of these being used interchangeably many times. This is the precise reason for asking this question. 2012 In 2012 I had two interesting series running on the blog. If there is no fun in learning, the learning becomes a burden. For that reason, I decided to build a multi-part quiz around SEQUENCE. The quiz was to identify the next value of the sequence. I encourage all of you to take part in this fun quiz. Guess the Next Value – Puzzle 1 Guess the Next Value – Puzzle 2 Guess the Next Value – Puzzle 3 Guess the Next Value – Puzzle 4 Simple Example to Configure Resource Governor – Introduction to Resource Governor Resource Governor is a feature which can manage SQL Server workload and system resource consumption. We can limit CPU and memory consumption by limiting/governing/throttling them on the SQL Server. If different workloads are running on SQL Server and each workload needs different resources, or when workloads compete for resources with each other and affect the performance of the whole server, Resource Governor becomes very important. 
Tricks to Replace SELECT * with Column Names – SQL in Sixty Seconds #017 – Video. SELECT * retrieves unnecessary columns and increases network traffic; when new columns are added, views need to be refreshed manually; it leads to usage of a sub-optimal execution plan; it uses the clustered index in most cases instead of the optimal index; and it is difficult to debug. SQL SERVER – Load Generator – Free Tool From CodePlex The best part of this SQL Server Load Generator is that users can run multiple simultaneous queries against SQL Server using different login accounts and different application names. The interface of the tool is extremely easy to use and very intuitive as well. A Puzzle – Swap Value of Column Without Case Statement Let us assume there is a single column in the table called Gender. The challenge is to write a single update statement which will flip or swap the value in the column. For example, if the value in the gender column is 'male', swap it with 'female', and if the value is 'female', swap it with 'male'. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
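Since the 2010 list above describes the MERGE pattern only in prose, here is a minimal hedged sketch of it; the table and column names are invented:

-- update matched rows, insert unmatched ones, in a single statement
MERGE dbo.TargetTable AS t
USING dbo.SourceTable AS s
    ON t.ID = s.ID
WHEN MATCHED THEN
    UPDATE SET t.Amount = s.Amount
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ID, Amount) VALUES (s.ID, s.Amount);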

    Read the article

  • Tile Collision & Sliding against tiles

    - by Devin Rawlek
I have a tile-based map with a top-down camera. My sprite stops moving when he collides with a wall in any of the four directions; however, I am trying to get the sprite to slide along the wall if more than one directional key is pressed after being stopped. Tiles are set to 32 x 32. Here is my code: // Gets Tile Player Is Standing On var splatterTileX = (int)player.Position.X / Engine.TileWidth; var splatterTileY = (int)player.Position.Y / Engine.TileHeight; // Foreach Layer In World Splatter Map Layers foreach (var layer in WorldSplatterTileMapLayers) { // If Sprite Is Not On Any Edges if (splatterTileX < layer.Width - 1 && splatterTileX > 0 && splatterTileY < layer.Height - 1 && splatterTileY > 0) { tileN = layer.GetTile(splatterTileX, splatterTileY - 1); // North tileNE = layer.GetTile(splatterTileX + 1, splatterTileY - 1); // North-East tileE = layer.GetTile(splatterTileX + 1, splatterTileY); // East tileSE = layer.GetTile(splatterTileX + 1, splatterTileY + 1); // South-East tileS = layer.GetTile(splatterTileX, splatterTileY + 1); // South tileSW = layer.GetTile(splatterTileX - 1, splatterTileY + 1); // South-West tileW = layer.GetTile(splatterTileX - 1, splatterTileY); // West tileNW = layer.GetTile(splatterTileX - 1, splatterTileY - 1); // North-West } // If Sprite Is Not On Any X Edges And Is On -Y Edge if (splatterTileX < layer.Width - 1 && splatterTileX > 0 && splatterTileY == 0) { tileE = layer.GetTile(splatterTileX + 1, splatterTileY); // East tileSE = layer.GetTile(splatterTileX + 1, splatterTileY + 1); // South-East tileS = layer.GetTile(splatterTileX, splatterTileY + 1); // South tileSW = layer.GetTile(splatterTileX - 1, splatterTileY + 1); // South-West tileW = layer.GetTile(splatterTileX - 1, splatterTileY); // West } // If Sprite Is On +X And -Y Edges if (splatterTileX == layer.Width - 1 && splatterTileY == 0) { tileS = layer.GetTile(splatterTileX, splatterTileY + 1); // South tileSW = layer.GetTile(splatterTileX - 1, splatterTileY + 1); // South-West tileW = layer.GetTile(splatterTileX - 1, splatterTileY); // West } // If Sprite Is On +X Edge And Y Is Not On Any Edge if (splatterTileX == layer.Width - 1 && splatterTileY < layer.Height - 1 && splatterTileY > 0) { tileS = layer.GetTile(splatterTileX, splatterTileY + 1); // South tileSW = layer.GetTile(splatterTileX - 1, splatterTileY + 1); // South-West tileW = layer.GetTile(splatterTileX - 1, splatterTileY); // West tileNW = layer.GetTile(splatterTileX - 1, splatterTileY - 1); // North-West tileN = layer.GetTile(splatterTileX, splatterTileY - 1); // North } // If Sprite Is On +X And +Y Edges if (splatterTileX == layer.Width - 1 && splatterTileY == layer.Height - 1) { tileW = layer.GetTile(splatterTileX - 1, splatterTileY); // West tileNW = layer.GetTile(splatterTileX - 1, splatterTileY - 1); // North-West tileN = layer.GetTile(splatterTileX, splatterTileY - 1); // North } // If Sprite Is Not On Any X Edges And Is On +Y Edge if (splatterTileX < (layer.Width - 1) && splatterTileX > 0 && splatterTileY == layer.Height - 1) { tileW = layer.GetTile(splatterTileX - 1, splatterTileY); // West tileNW = layer.GetTile(splatterTileX - 1, splatterTileY - 1); // North-West tileN = layer.GetTile(splatterTileX, splatterTileY - 1); // North tileNE = layer.GetTile(splatterTileX + 1, splatterTileY - 1); // North-East tileE = layer.GetTile(splatterTileX + 1, splatterTileY); // East } // If Sprite Is On -X And +Y Edges if (splatterTileX == 0 && splatterTileY == layer.Height - 1) { tileN = layer.GetTile(splatterTileX, splatterTileY - 1); // 
North tileNE = layer.GetTile(splatterTileX + 1, splatterTileY - 1); // North-East tileE = layer.GetTile(splatterTileX + 1, splatterTileY); // East } // If Sprite Is On -X Edge And Y Is Not On Any Edges if (splatterTileX == 0 && splatterTileY < (layer.Height - 1) && splatterTileY > 0) { tileN = layer.GetTile(splatterTileX, splatterTileY - 1); // North tileNE = layer.GetTile(splatterTileX + 1, splatterTileY - 1); // North-East tileE = layer.GetTile(splatterTileX + 1, splatterTileY); // East tileSE = layer.GetTile(splatterTileX + 1, splatterTileY + 1); // South-East tileS = layer.GetTile(splatterTileX, splatterTileY + 1); // South } // If Sprite Is In The Top Left Corner if (splatterTileX == 0 && splatterTileY == 0) { tileE = layer.GetTile(splatterTileX + 1, splatterTileY); // East tileSE = layer.GetTile(splatterTileX + 1, splatterTileY + 1); // South-East tileS = layer.GetTile(splatterTileX, splatterTileY + 1); // South } // Creates A New Rectangle For TileN tileN.TileRectangle = new Rectangle(splatterTileX * Engine.TileWidth, (splatterTileY - 1) * Engine.TileHeight, Engine.TileWidth, Engine.TileHeight); // Tile Collision Detection Between Player Rectangle And N Tile var tileNCollision = player.Rectangle.Intersects(tileN.TileRectangle); // Creates A New Rectangle For TileNE tileNE.TileRectangle = new Rectangle((splatterTileX + 1) * Engine.TileWidth, (splatterTileY - 1) * Engine.TileHeight, Engine.TileWidth, Engine.TileHeight); // Tile Collision Detection Between Player Rectangle And NE Tile var tileNECollision = player.Rectangle.Intersects(tileNE.TileRectangle); // Creates A New Rectangle For TileE tileE.TileRectangle = new Rectangle((splatterTileX + 1) * Engine.TileWidth, splatterTileY * Engine.TileHeight, Engine.TileWidth, Engine.TileHeight); // Tile Collision Detection Between Player Rectangle And E Tile var tileECollision = player.Rectangle.Intersects(tileE.TileRectangle); // Creates A New Rectangle For TileSE tileSE.TileRectangle = new Rectangle((splatterTileX + 1) * Engine.TileWidth, (splatterTileY + 1) * Engine.TileHeight, Engine.TileWidth, Engine.TileHeight); // Tile Collision Detection Between Player Rectangle And SE Tile var tileSECollision = player.Rectangle.Intersects(tileSE.TileRectangle); // Creates A New Rectangle For TileS tileS.TileRectangle = new Rectangle(splatterTileX * Engine.TileWidth, (splatterTileY + 1) * Engine.TileHeight, Engine.TileWidth, Engine.TileHeight); // Tile Collision Detection Between Player Rectangle And S Tile var tileSCollision = player.Rectangle.Intersects(tileS.TileRectangle); // Creates A New Rectangle For TileSW tileSW.TileRectangle = new Rectangle((splatterTileX - 1) * Engine.TileWidth, (splatterTileY + 1) * Engine.TileHeight, Engine.TileWidth, Engine.TileHeight); // Tile Collision Detection Between Player Rectangle And SW Tile var tileSWCollision = player.Rectangle.Intersects(tileSW.TileRectangle); // Creates A New Rectangle For TileW tileW.TileRectangle = new Rectangle((splatterTileX - 1) * Engine.TileWidth, splatterTileY * Engine.TileHeight, Engine.TileWidth, Engine.TileHeight); // Tile Collision Detection Between Player Rectangle And Current Tile var tileWCollision = player.Rectangle.Intersects(tileW.TileRectangle); // Creates A New Rectangle For TileNW tileNW.TileRectangle = new Rectangle((splatterTileX - 1) * Engine.TileWidth, (splatterTileY - 1) * Engine.TileHeight, Engine.TileWidth, Engine.TileHeight); // Tile Collision Detection Between Player Rectangle And Current Tile var tileNWCollision = 
player.Rectangle.Intersects(tileNW.TileRectangle); // Allow Sprite To Occupy More Than One Tile if (tileNCollision && tileN.TileBlocked == false) { tileN.TileOccupied = true; } if (tileECollision && tileE.TileBlocked == false) { tileE.TileOccupied = true; } if (tileSCollision && tileS.TileBlocked == false) { tileS.TileOccupied = true; } if (tileWCollision && tileW.TileBlocked == false) { tileW.TileOccupied = true; } // Player Up if (keyState.IsKeyDown(Keys.W) || (gamePadOneState.DPad.Up == ButtonState.Pressed)) { player.CurrentAnimation = AnimationKey.Up; if (tileN.TileOccupied == false) { if (tileNWCollision && tileNW.TileBlocked || tileNCollision && tileN.TileBlocked || tileNECollision && tileNE.TileBlocked) { playerMotion.Y = 0; } else playerMotion.Y = -1; } else if (tileN.TileOccupied) { if (tileNWCollision && tileNW.TileBlocked || tileNECollision && tileNE.TileBlocked) { playerMotion.Y = 0; } else playerMotion.Y = -1; } } // Player Down if (keyState.IsKeyDown(Keys.S) || (gamePadOneState.DPad.Down == ButtonState.Pressed)) { player.CurrentAnimation = AnimationKey.Down; // Check Collision With Tiles if (tileS.TileOccupied == false) { if (tileSWCollision && tileSW.TileBlocked || tileSCollision && tileS.TileBlocked || tileSECollision && tileSE.TileBlocked) { playerMotion.Y = 0; } else playerMotion.Y = 1; } else if (tileS.TileOccupied) { if (tileSWCollision && tileSW.TileBlocked || tileSECollision && tileSE.TileBlocked) { playerMotion.Y = 0; } else playerMotion.Y = 1; } } // Player Left if (keyState.IsKeyDown(Keys.A) || (gamePadOneState.DPad.Left == ButtonState.Pressed)) { player.CurrentAnimation = AnimationKey.Left; if (tileW.TileOccupied == false) { if (tileNWCollision && tileNW.TileBlocked || tileWCollision && tileW.TileBlocked || tileSWCollision && tileSW.TileBlocked) { playerMotion.X = 0; } else playerMotion.X = -1; } else if (tileW.TileOccupied) { if (tileNWCollision && tileNW.TileBlocked || tileSWCollision && tileSW.TileBlocked) { playerMotion.X = 0; } else playerMotion.X = -1; } } // Player Right if (keyState.IsKeyDown(Keys.D) || (gamePadOneState.DPad.Right == ButtonState.Pressed)) { player.CurrentAnimation = AnimationKey.Right; if (tileE.TileOccupied == false) { if (tileNECollision && tileNE.TileBlocked || tileECollision && tileE.TileBlocked || tileSECollision && tileSE.TileBlocked) { playerMotion.X = 0; } else playerMotion.X = 1; } else if (tileE.TileOccupied) { if (tileNECollision && tileNE.TileBlocked || tileSECollision && tileSE.TileBlocked) { playerMotion.X = 0; } else playerMotion.X = 1; } } I have my tile detection set up so the 8 tiles around the sprite are the only ones detected. The collision variable is true if the sprite's rectangle intersects with one of the detected tiles. The sprite's origin is centered at 16, 16 on the image, so whenever this point goes over to the next tile it calls the surrounding tiles. I am trying to have collision detection like in the game Secret of Mana. If I remove the diagonal checks, the sprite will pass through those tiles, because whichever tile the sprite's origin is on will be the detection center. So if the sprite is near the edge of the tile and then goes up, it looks like half the sprite is walking through the wall. Is there a way for the detection to occur for each tile the sprite's rectangle touches?
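For what it's worth, one common way to get the sliding behavior described here is to resolve movement one axis at a time: apply the X component, test for collision, then apply the Y component and test again, so a blocked axis zeroes out without cancelling the other. A minimal hedged sketch; player and playerMotion follow the question's code, while the speed scalar and the IsBlocked helper (which would test a proposed position's rectangle against the blocked tiles) are hypothetical, and Position is assumed writable:

// try the horizontal component on its own
var proposed = player.Position + new Vector2(playerMotion.X, 0) * speed;
if (!IsBlocked(proposed))           // IsBlocked is a hypothetical tile-overlap test
    player.Position = proposed;

// then try the vertical component, independently of the X result
proposed = player.Position + new Vector2(0, playerMotion.Y) * speed;
if (!IsBlocked(proposed))
    player.Position = proposed;

This way, holding up against a wall while also pressing left keeps the leftward motion even though the upward motion is rejected, which is the Secret of Mana style slide.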

    Read the article
