Search Results

Search found 17345 results on 694 pages for 'next'.


  • Behavior Driven Development (BDD) and DevExpress XAF

    - by Patrick Liekhus
    So in my previous posts I showed you how I used EDMX to quickly build my business objects within XPO and XAF. But how do you test whether your business objects are actually doing what you want, and verify that your business logic is correct? Well, I was reading my monthly MSDN Magazine late last year and came across an article about using SpecFlow and WatiN to build BDD tests. So why not use these same techniques to write SpecFlow-style scripts and have them generate EasyTest scripts for use with XAF? Let me outline and show a few things below. I plan on releasing this code in a short while; I just wanted to preview what I was thinking.

    Before we begin…

    First, if you have not read the article in MSDN, here is the link to the article that gave me my inspiration. It covers the overview of BDD vs. TDD, how to write some of the SpecFlow syntax, and how to use the "Steps" logic to create your own tests.

    Second, if you have not heard of EasyTest from DevExpress, I strongly recommend you review it here. It basically takes the power of XAF and the beauty of your application and allows you to create text-based files to execute automated commands within your application.

    Why would we do this? Because, as you will see below, the Cucumber syntax is easier for business analysts to interpret and digest the business rules from. You can find most of the information you will need on Cucumber syntax within The Secret Ninja Cucumber Scrolls, located here. The basic syntax is: Given X, When Y, Then Z. For example: Given I am at the login screen, When I enter my login credentials, Then I expect to see the home screen. Pretty easy syntax to follow.

    Finally, we will need to download and install SpecFlow. You can find it on their website here. Once you have it installed, let's write our first test.

    Let's get started…

    So where to start? Create a new testing project within your solution. I typically name it following a convention similar to XAF's: <project name>.FunctionalTests (e.g., AlbumManager.FunctionalTests). Remove the basic test that is created for you; we will not use the default test but rather create our own SpecFlow "Feature" files. Add a new item to your project and select the SpecFlow Feature file under C#. Name your feature file, as you do your class files, after the test it performs.

    Now you can crack open your new feature file and write the actual test. Make sure to have your Ninja Scrolls from above handy, as they provide valuable guidance on how to write your test syntax. In the test below you can see how I defined the documentation in the Feature section. This is strictly for readability and does not affect the test. The next section is the Scenario Outline, which is considered a test template. You can see the brackets <> around the fields that will be filled in for each test. So in the example below: Given I am starting a new test and the application is open means I want a new EasyTest file and the windows application generated by XAF is open. Next, When I am at the Albums screen tells XAF to navigate to the Albums list view. And I click the New:Album button tells XAF to click the New button on the list grid. And I enter the following information tells XAF which fields to complete with the mapped values. And I click the Save and Close button causes the record to be saved and the detail form to be closed. Then I verify results tests the input data against what is visible in the grid, to ensure that your record was created. The Scenarios section gives each test a unique name and then fills in the values for each test. This way you can use the same test to make multiple passes with different data.

    Almost there. Now save the feature file, and the BDD tests will be written for you using standard unit test syntax. This is all handled by SpecFlow, so just save the file. What you will see in your Test List Editor is a unit test for each of the scenarios you just built. You can now use standard unit testing frameworks to execute the tests as you desire. As you would expect, these BDD SpecFlow tests can then be automated into your build process to ensure that your business requirements are satisfied each and every time.

    How does it work?

    What we have done is to intercept the testing logic at runtime and interpret the SpecFlow syntax into EasyTest syntax. This is handled by the basic StepDefinitions that we are working on now. We expect to put these on CodePlex within the next few days. You can always override them and make your own rules as you see fit for your project; follow the MSDN article above to start your own. You can see part of our implementation below.

    As you can gather from the MSDN article and the code sample below, we have created our own common rules to build the above syntax. The code implementation for these rules basically saves your information from the feature file into an EasyTest file format. It then executes the EasyTest file and parses the XML results of the test. If the test succeeds, the test passes. If the test fails, the EasyTest failure message is logged and the screen shot (as captured by EasyTest) is saved for your review.

    Again, we are working on getting this code ready for mass consumption, but at this time it is not ready. We will post another message when it is ready, with all the details about usage and setup. Thanks
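
    For a sense of the shape this takes, here is a minimal sketch of a SpecFlow binding along these lines. It is an illustration only, not the actual StepDefinitions (which were unpublished at the time of writing): the [Binding]/[Given]/[When]/[Then] attributes and the Table type are standard SpecFlow, while the step wordings and the EasyTest command strings are assumptions based on the scenario described above.

        // A minimal SpecFlow binding sketch: each step appends to a buffer that is
        // later written out as an EasyTest script. The EasyTest command text below
        // is purely illustrative, not actual EasyTest syntax.
        using System.Text;
        using TechTalk.SpecFlow;

        [Binding]
        public class EasyTestSteps
        {
            // Accumulates the generated EasyTest script for the current scenario.
            private static readonly StringBuilder Script = new StringBuilder();

            [Given(@"I am starting a new test and the application is open")]
            public void GivenANewTest()
            {
                Script.Length = 0; // begin a fresh EasyTest file
            }

            [When(@"I am at the (.*) screen")]
            public void WhenIAmAtTheScreen(string listView)
            {
                Script.AppendLine("*Action Navigate(" + listView + ")");
            }

            [When(@"I enter the following information")]
            public void WhenIEnterTheFollowingInformation(Table fields)
            {
                foreach (var row in fields.Rows)
                    Script.AppendLine(" " + row["Field"] + " = " + row["Value"]);
            }

            [Then(@"I verify results")]
            public void ThenIVerifyResults()
            {
                // Write the buffered script to an EasyTest file, run EasyTest
                // against it, parse the XML results, and fail the unit test on
                // any reported error.
            }
        }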

    Read the article

  • How can I remove the Unity Launcher?

    - by Magnus Hoff
    Unfortunately, the Unity Launcher on the left-hand side of the screen takes more valuable space away than the new menu bar gives. Is there any way to get rid of the Launcher? Alternatives I would be satisfied with include:
    - Not having the Launcher at all
    - Having the Launcher hide automatically
    - Having applications open on top of the Launcher (not next to it)
    (edit:) Note that I am specifically looking for a way to keep the global menu bar while getting rid of the Launcher.

    Read the article

  • Growing Up with Samba

    Next month Samba eXPerience 2010, the ninth international Samba conference for users and developers, will be held in Göttingen, Germany from May 3rd to 7th. Jeremy Allison...

    Read the article

  • Why Is Vertical Monitor Resolution So Often a Multiple of 360?

    - by Jason Fitzpatrick
    Stare at a list of monitor resolutions long enough and you might notice a pattern: many of the vertical resolutions, especially those of gaming or multimedia displays, are multiples of 360 (720, 1080, 1440, etc.). But why exactly is this the case? Is it arbitrary, or is there something more at work?

    Today's Question & Answer session comes to us courtesy of SuperUser, a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    The Question

    SuperUser reader Trojandestroy recently noticed something about his display interface and needs answers:

    YouTube recently added 1440p functionality, and for the first time I realized that all (most?) vertical resolutions are multiples of 360. Is this just because the smallest common resolution is 480×360, and it's convenient to use multiples? (Not doubting that multiples are convenient.) And/or was that the first viewable/conveniently sized resolution, so hardware (TVs, monitors, etc.) grew with 360 in mind? Taking it further, why not have a square resolution? Or something else unusual? (Assuming it's usual enough that it's viewable.) Is it merely a pleasing-the-eye situation? So why have the display be a multiple of 360?

    The Answer

    SuperUser contributor User26129 offers us not just an answer as to why the numerical pattern exists, but a history of screen design in the process:

    Alright, there are a couple of questions and a lot of factors here. Resolutions are a really interesting field where psycho-optics meets marketing.

    First of all, why are the vertical resolutions on YouTube multiples of 360? This is of course just arbitrary; there is no real reason this is the case. The reason is that resolution here is not the limiting factor for YouTube videos – bandwidth is. YouTube has to re-encode every video that is uploaded a couple of times, and tries to use as few re-encoding formats/bitrates/resolutions as possible to cover all the different use cases. For low-res mobile devices they have 360×240, for higher-res mobile there's 480p, and for the computer crowd there is 360p for 2xISDN/multiuser landlines, 720p for DSL, and 1080p for higher-speed internet. For a while there were some codecs other than h.264, but these are slowly being phased out, with h.264 having essentially 'won' the format war and all computers being outfitted with hardware codecs for it.

    Now, there is some interesting psycho-optics going on as well. As I said: resolution isn't everything. 720p with really strong compression can and will look worse than 240p at a very high bitrate. But on the other side of the spectrum: throwing more bits at a certain resolution doesn't magically make it better beyond some point. There is an optimum here, which of course depends on both resolution and codec. In general: the optimal bitrate is actually proportional to the resolution.

    So the next question is: what kind of resolution steps make sense? Apparently, people need about a 2x increase in resolution to really see (and prefer) a marked difference. Anything less than that and many people will simply not bother with the higher bitrates; they'd rather use their bandwidth for other stuff. This was researched quite a long time ago and is the big reason why we went from 720×576 (415 kpix) to 1280×720 (922 kpix), and then again from 1280×720 to 1920×1080 (2 Mpix). Stuff in between is not a viable optimization target. And again, 1440p is about 3.7 Mpix, another ~2x increase over HD. You will see a difference there. 4K is the next step after that.
    Next up is that magical number of 360 vertical pixels. Actually, the magic number is 120 or 128. All resolutions are some kind of multiple of 120 pixels nowadays; back in the day they used to be multiples of 128. This is something that just grew out of the LCD panel industry. LCD panels use what are called line drivers, little chips that sit on the sides of your LCD screen and control how bright each subpixel is. Because historically, for reasons I don't really know for sure (probably memory constraints), these multiple-of-128 or multiple-of-120 resolutions already existed, the industry-standard line drivers became drivers with 360 line outputs (1 per subpixel). If you were to tear down your 1920×1080 screen, I would put money on there being 16 line drivers on the top/bottom and 9 on one of the sides. Oh hey, that's 16:9. Guess how obvious that resolution choice was back when 16:9 was 'invented'.

    Then there's the issue of aspect ratio. This is really a completely different field of psychology, but it boils down to: historically, people have believed and measured that we have a sort of wide-screen view of the world. Naturally, people believed that the most natural representation of data on a screen would be a wide-screen view, and this is where the great anamorphic revolution of the '60s came from, when films were shot in ever wider aspect ratios. Since then, this kind of knowledge has been refined and mostly debunked. Yes, we do have a wide-angle view, but the area where we can actually see sharply – the center of our vision – is fairly round. Slightly elliptical and squashed, but not really more than about 4:3 or 3:2. So for detailed viewing, for instance reading text on a screen, you can utilize most of your detail vision by employing an almost-square screen, a bit like the screens up to the mid-2000s.

    However, again, this is not how marketing took it. Computers in ye olden days were used mostly for productivity and detailed work, but as they commoditized and as the computer as a media-consumption device evolved, people didn't necessarily use their computer for work most of the time. They used it to watch media content: movies, television series and photos. And for that kind of viewing, you get the most 'immersion factor' if the screen fills as much of your vision (including your peripheral vision) as possible. Which means widescreen.

    But there's more marketing still. When detail work was still an important factor, people cared about resolution: as many pixels as possible on the screen. SGI was selling almost-4K CRTs! The most optimal way to get the maximum number of pixels out of a glass substrate is to cut it as square as possible; 1:1 or 4:3 screens have the most pixels per diagonal inch. But with displays becoming more consumery, inch-size became more important than the number of pixels, and this is a completely different optimization target. To get the most diagonal inches out of a substrate, you want to make the screen as wide as possible. First we got 16:10, then 16:9, and there have been moderately successful panel manufacturers making 22:9 and 2:1 screens (like Philips). Even though pixel density and absolute resolution went down for a couple of years, inch-sizes went up, and that's what sold. Why buy a 19″ 1280×1024 when you can buy a 21″ 1366×768? Eh…

    I think that about covers all the major aspects here.
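
    (As a quick check on the line-driver arithmetic above, under the answer's assumption of one source-driver output per subpixel: 16 drivers × 360 outputs = 5,760 = 1,920 columns × 3 RGB subpixels along the top/bottom edge, while the 9 side drivers × 120 = 1,080 rows, matching the other 'magic number' of 120.)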
    There's more, of course: bandwidth limits of HDMI, DVI, DP and of course VGA played a role, and if you go back to the pre-2000s, graphics memory, in-computer bandwidth and simply the limits of commercially available RAMDACs played an important role. But for today's considerations, this is about all you need to know.

    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • Recording Available: March 2012 Quarterly Customer Update Webcast

    - by John Klinke
    Missed the recent Quarterly Customer Update Webcast? We covered several topics, including:
    * WebCenter 4 Pillars overview
    * Support Update
    * WebCenter Content 11gR1 Update
    * WebCenter Portal 11gR1 Update
    * Oracle Social Network Overview
    VIEW WEBCAST RECORDING: Access the March 2012 Webcast recording and presentation by going to My Oracle Support Site Note 568127.1. We'll announce the next Quarterly Customer Update Webcast here on the WebCenter Content Alerts blog.

    Read the article

  • Recording Available: March 2012 Quarterly Customer Update Webcast

    - by R.Hunter
    Missed the recent Quarterly Customer Update Webcast? We covered several topics, including:
    * WebCenter 4 Pillars overview
    * Support Update
    * WebCenter Content 11gR1 Update
    * WebCenter Portal 11gR1 Update
    * Oracle Social Network Overview
    VIEW WEBCAST RECORDING: Access the March 2012 Webcast recording and presentation by going to My Oracle Support Site Note 568127.1. We'll announce the next Quarterly Customer Update Webcast here on the WebCenter Alerts blog.

    Read the article

  • Refresh resources in Windows Phone 7 app

    - by Eugene
    I have a Windows Phone app that relies on an XML data file that comes packaged with the app. In the next version of my app the XML file will be updated; how do I update the data file once per app update? I thought I could change the file name in isolated storage, but that would leave trash behind. I could also check for exceptions when I load the XML file, but are there any other, more elegant ways?
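
    One common pattern, sketched below, is to stamp the isolated-storage copy with a data version and re-copy the packaged XML whenever the stamp doesn't match the version shipped in the current release. This is a sketch under assumptions: the file name Data.xml, the DataVersion settings key, and the constant you bump per release are all illustrative.

        // Sketch: refresh the isolated-storage copy of a packaged XML file once
        // per app update. Names (Data.xml, DataVersion) are illustrative.
        using System;
        using System.IO.IsolatedStorage;
        using System.Windows;

        public static class DataFileUpdater
        {
            // Bump this constant in each release that ships a new Data.xml.
            private const string CurrentDataVersion = "2";

            public static void EnsureFreshDataFile()
            {
                var settings = IsolatedStorageSettings.ApplicationSettings;
                string installed;
                settings.TryGetValue("DataVersion", out installed);
                if (installed == CurrentDataVersion)
                    return; // the isolated-storage copy is already current

                using (var store = IsolatedStorageFile.GetUserStoreForApplication())
                using (var packaged = Application.GetResourceStream(
                           new Uri("Data.xml", UriKind.Relative)).Stream)
                using (var dest = store.CreateFile("Data.xml")) // overwrites stale copy
                {
                    var buffer = new byte[4096];
                    int read;
                    while ((read = packaged.Read(buffer, 0, buffer.Length)) > 0)
                        dest.Write(buffer, 0, read);
                }

                settings["DataVersion"] = CurrentDataVersion;
                settings.Save();
            }
        }

    Calling EnsureFreshDataFile() during application launch, before the XML is first read, keeps the refresh to exactly once per update and leaves nothing behind.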

    Read the article

  • Saving user details on OnSuspending event for Metro Style Apps

    - by nmarun
    I recently started getting to know Metro Style Apps on Windows 8. It looks pretty interesting so far, and VS2011 definitely helps make it easier to learn and create Metro Style Apps. One of the features available to developers is the ability to save user data so it can be retrieved the next time the app is run, after being closed by the user or even launched back from a suspended state. Here's a little history on this whole 'suspended' state of a Metro Style app: Once the user say, 'alt+tab...(read more)

    Read the article

  • More Maintenance Plan Weirdness

    - by AjarnMark
    I'm not a big fan of the built-in Maintenance Plan functionality in SQL Server. I like the interface in SQL 2005 better than 2000 (it looks more like building an SSIS package), but it's still a bit of a black box. You don't really know what commands are being run based on the selections you have made, and you can easily make some unwise choices without realizing it, such as shrinking your database on a regular basis. I really prefer to know exactly which commands, and with which options, are being run on my servers.

    Recently I had another very strange thing happen with a Maintenance Plan, this time in SQL 2005 SP3. I inherited this server and have done a bit of cleanup on it, but had not yet gotten around to replacing the Maintenance Plans with all my own scripts. However, one of the maintenance plans, which was just responsible for doing LOG backups, was running more frequently than that system needed, and I thought I would just tweak the schedule a bit. So I opened the Maintenance Plan and edited the properties of the Subplan, setting a new schedule, saved it, and figured all was good to go. But the next execution of the Scheduled Job that triggers the Maintenance Plan code failed with an error about the owner of the job. Specifically, the error was, "Unable to determine if the owner (OldDomain\OldDBAUserID) of job MaintenancePlanName.Subplan has server access (reason: Could not obtain information about Windows NT group/user 'OldDomain\OldDBAUserID'..". I was really confused because I had previously updated all of the jobs to have current accounts as the owners. At first I thought it was just a fluke, but it happened on the next scheduled cycle, so I investigated further and, sure enough, that job had the old DBA's account listed as the owner. I fixed it and the job successfully ran to completion.

    Now, I don't really like mysteries like that, so I did some more testing and verified that, sure enough, just editing the Subplan schedule and saving the Maintenance Plan caused the Scheduled Job to be recreated with the old credentials. I don't know where it is getting those credentials, but I can only assume that it is the same as the original creator of the Maintenance Plan, and for some reason it insists on using that ID for the job owner. I looked through the options in SSMS and could not find anything that would let me easily set the value I wanted it to use. I suspect that if I did something like executing sp_changeobjectowner against the Maintenance Plan, it would use that new ID instead. I'm sure there is a good reason that it works this way, but rather than mess around with it much more, I'm just going to spend my time rolling out my replacement scripts instead.

    Chalk this little hidden oddity up as yet one more reason I'm not a fan of Maintenance Plans.
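
    For anyone hitting the same thing, the owner can at least be put back programmatically after each edit. Below is a minimal sketch (not a fix for the underlying behavior) that calls msdb's sp_update_job; the connection string, job name, and login are placeholders mirroring the names in the post.

        // Sketch: reset a SQL Agent job's owner via msdb.dbo.sp_update_job.
        // Connection string, job name, and login are placeholders.
        using System.Data;
        using System.Data.SqlClient;

        class FixJobOwner
        {
            static void Main()
            {
                using (var conn = new SqlConnection(
                           "Server=MyServer;Database=msdb;Integrated Security=SSPI"))
                using (var cmd = new SqlCommand("dbo.sp_update_job", conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.AddWithValue("@job_name", "MaintenancePlanName.Subplan");
                    cmd.Parameters.AddWithValue("@owner_login_name", @"NewDomain\CurrentDBAUserID");
                    conn.Open();
                    cmd.ExecuteNonQuery(); // reassigns the job owner in msdb
                }
            }
        }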

    Read the article

  • Sabre Manages Fast Data Growth with Oracle Data Integration Products

    - by Irem Radzik
    Last year at OpenWorld we announced Sabre Holdings as a winner of the Fusion Middleware Innovation Awards. The Sabre team did an excellent job of leveraging cutting-edge technologies to manage the rapid data growth and exponential scalability demands they have experienced in the travel industry. Today we announced the details and specific benefits of Sabre's new real-time data integration solution in a press release. Please take a look if you haven't seen it yet: Sabre Holdings Deploys Oracle Data Integrator and Oracle GoldenGate to Support Rapid Customer Growth.

    There are three different areas of benefits Sabre achieved by using Oracle Data Integration products:
    * Managed a 7X increase in data sources for the enterprise data warehouse
    * Reduced infrastructure complexity
    * Decreased time to market for new products and services by 30 percent

    This simply shows that using the latest technologies helps companies innovate robust solutions against today's key data management challenges. And the benefit of using a next-generation data integration technology is seen not only in IT operations, but also on the business side. A better data integration solution for the enterprise data warehouse delivered the platform they need to accelerate how they service their customers, improving their competitive advantage.

    Tomorrow I will give another great example of innovation with next-generation data integration from Oracle. We will be discussing the Fusion Middleware Innovation Awards 2012 winners and their results with using Oracle's data integration products.

    Read the article

  • Redgate ANTS Performance Profiler

    - by Jon Canning
    Seemingly forever I've been working on a business idea. It's a REST API delivering content to mobiles, and I've never really had much idea about its performance. Yes, I have a suite of unit tests and integration tests, but these only tell me that it works, not how well it works. I was also about to embark on a major refactor, swapping the database from MongoDB to RavenDB, and was curious to see if that impacted performance at all. So I needed a profiler that supported IIS Express that I could run my integration tests against, and Google gave me:

    http://www.red-gate.com/supportcenter/content/ANTS_Performance_Profiler/help/7.4/app_iise

    Excellent. Following the above guide, an instance of IIS Express is launched, as is Internet Explorer. The latter eventually becomes annoying (I would like to decide whether I want a browser opened), but thankfully the guide is wrong in that it can be closed and profiling will continue. So I ran my tests, stopped profiling, and was presented with a call tree listing the endpoints called and allowing me to drill down to the source code beneath.

    Although useful and fascinating, this wasn't what I was expecting to see; I was after the method timings from the entire test suite. Switching Show to Methods Grid presented me with a list of my methods, with the slowest lit up in red at the top. Marvellous. I did find that if you switch to Methods Grid before the Call tree has loaded, you do not get the red warnings.

    StructureMap was very busy, and next on the list was a request filter that I didn't expect to be so overworked. Highlighting it, the source code was presented to me in the bottom window with timings and a nice red indicator to show me where to look. Oh horror, that reflection hack I put in months ago; I'd forgotten all about it. It was calling Validate<T>(), which in turn was resolving a validator from StructureMap. Note to self: use //TODO: when leaving smelly code lying around.

    Before refactoring, remember to Save Profile Results from the File menu. Annoyingly, you are not prompted to save your results when exiting, and using Save Project will only leave you thankful that you have version control and can go back in time to run your tests again.

    Having implemented StructureMap's ForGenericType, I ran my tests again and: win. Thank you, ANTS. (What does ANTS stand for, BTW?)

    There's definitely room in my toolbox for a profiler; what started out as idle curiosity actually solved a potential problem. When presented with a new codebase, I can see enormous benefit from getting an overview of the pipeline from the call tree before drilling into the code, and as a sanity check before release it gives a little more reassurance that you've done your best, and shows you exactly where to look if you haven't.

    Next I'm going to profile a load test.
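
    For context on the fix mentioned above: the post used StructureMap's ForGenericType, but a closely related way to get Validate<T>() resolving without a per-call reflection hack is to register the open generic directly. The sketch below is illustrative only; the IValidator/DefaultValidator names are invented, not taken from the post.

        // Sketch: open-generic registration in StructureMap so IValidator<T>
        // resolves for any T without manual reflection. Names are invented.
        using StructureMap;

        public interface IValidator<T> { void Validate(T instance); }

        public class DefaultValidator<T> : IValidator<T>
        {
            public void Validate(T instance) { /* validation rules would go here */ }
        }

        public static class Bootstrapper
        {
            public static Container Build()
            {
                // Requests for IValidator<Foo> are closed over Foo automatically.
                return new Container(x =>
                    x.For(typeof(IValidator<>)).Use(typeof(DefaultValidator<>)));
            }
        }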

    Read the article

  • Oracle ADF Mobile Video Series: Overview

    - by Michelle Kimihira
    Many of you have asked for some information about Oracle ADF Mobile, whose availability was announced on October 22nd. Today's video is a quick 3-minute overview of what ADF Mobile brings to the mobile application developer community. Next Wednesday's video will show our enthusiastic developers how to create an ADF Mobile application and deploy it to a device.
    Additional Information
    Product Information on OTN: ADF Mobile
    Product Information on Oracle.com: Oracle Fusion Middleware
    Follow us on Twitter and Facebook
    Subscribe to our regular Fusion Middleware Newsletter

    Read the article

  • Oracle Magazine, March/April 2009

    Oracle Magazine March/April features articles on next-generation data centers, Oracle and midsize businesses, efficient business processes, improving performance and management of Oracle Database 11g, managing Oracle Application Express, Technologist Tom Kyte, Oracle ADF, PL/SQL best practices, the HP Oracle Database Machine, security with Oracle Configuration Manager and much more.

    Read the article

  • Living La Vida LibrePlanet

    The LibrePlanet Conference will be held next week, March 19th-21st, in Cambridge, Massachusetts. The Google Open Source Programs Office's Leslie Hawthorn will be participating in the lively discussions...

    Read the article

  • Upgrade Workshop in Belgrade/Serbia

    - by Mike Dietrich
    Thanks to all of you attending the Database Upgrade Workshop in Belgrade today - hope you enjoyed it :-) Please find the most current version of the slides here: http://apex.oracle.com/folien and use the keyword ("Schluesselwort"): upgrade112. See you next time - thanks and kind regards, Mike

    Read the article

  • You're invited: Oracle Solaris Forum, June 19th, Petah Tikva

    - by Frederic Pariente
    The local ISV Engineering team will be attending and speaking at the Oracle and ilOUG Solaris Forum next week in Israel. Come meet us there! This free event requires registration, thanks.
    YOU'RE INVITED
    Oracle Solaris Forum
    Date: Tuesday, June 19th, 2012
    Time: 14:00
    Location: Dan Academic Center, Petach Tikva, Israel
    Agenda:
    * Enterprise Manager Ops Center and Oracle Exalogic Elastic Cloud
    * Solaris 11
    * Networking
    * Customer Case Study: BMC
    * Open Systems Curriculum
    See you there!

    Read the article

  • Puppet: Getting Started On Windows

    - by Robz / Fervent Coder
    Originally posted on: http://geekswithblogs.net/robz/archive/2014/08/07/puppet-getting-started-on-windows.aspx

    Now that we've talked a little about Puppet, let's see how easy it is to get started.

    Install Puppet

    Let's get Puppet installed. There are two ways to do that:
    1. With Chocolatey: open an administrative/elevated command shell and type:
       choco install puppet
    2. Download and install Puppet manually - http://puppetlabs.com/misc/download-options

    Run Puppet

    Let's make pasting into a console window work with Control + V (like it should):
       choco install wincommandpaste
    If you have a cmd.exe command shell open (and Chocolatey installed), type:
       RefreshEnv
    The previous command will refresh your environment variables, a la Chocolatey v0.9.8.24+. If you are running PowerShell, there isn't yet a refreshenv for you (one is coming though!).

    If you have to restart your CLI (command line interface) session, or you installed Puppet manually, open an administrative/elevated command shell and type:
       puppet resource user
    Output should look similar to a few of these:
       user { 'Administrator':
         ensure  => 'present',
         comment => 'Built-in account for administering the computer/domain',
         groups  => ['Administrators'],
         uid     => 'S-1-5-21-some-numbers-yo-500',
       }
    Let's create a user:
       puppet apply -e "user {'bobbytables_123': ensure => present, groups => ['Users'], }"
    Relevant output should look like:
       Notice: /Stage[main]/Main/User[bobbytables_123]/ensure: created
    Run the 'puppet resource user' command again. Note the user we created is there!

    Let's clean up after ourselves and remove the user we just created:
       puppet apply -e "user {'bobbytables_123': ensure => absent, }"
    Relevant output should look like:
       Notice: /Stage[main]/Main/User[bobbytables_123]/ensure: removed
    Run the 'puppet resource user' command one last time. Note we just removed a user!

    Conclusion

    You just did some configuration management / system administration. Welcome to the new world of awesome! Puppet is super easy to get started with. This is a taste so you can start seeing the power of automation and where you can go with it. We haven't talked about resources, manifests (scripts), best practices and all of that yet.

    Next we are going to start to get into more extensive things with Puppet. Next time we'll walk through getting a Vagrant environment up and running. That way we can do some crazier stuff and, when we are done, we can just clean it up quickly.

    Read the article

  • BizTalk To Get Improvements and 2010 Branding

    Microsoft on Tuesday described a few BizTalk Server enhancements to come, as well as a new name for its next-generation enterprise service bus product....

    Read the article

  • SQLServerCentral Best Practices Clinic: Part 4

    In this next installment looking at implementing best practices here at SQLServerCentral, Brad McGehee looks at a few performance counters and what they tell us about the performance of our database server.

    Read the article

  • Stupid simple music player?

    - by itsadok
    Here's what I want: I have a folder with MP3 files. I would like to play all the files in the folder, alphabetically, and I need a way to pause and skip to the next file - that's pretty much it. I don't want to use my music player to organize my music library; I don't need connections to network services, and I don't even need ID3 support. On Windows, WinAmp 2 did what I wanted. What's my best option on Ubuntu?

    Read the article

  • Why does my WIFI connection drop every 5-20 minutes?

    - by Benett Freeman
    There are other questions on this but none have been taken very far in their analysis. I get disconnects very often - sometimes every 20 mins or so, but sometimes even every few minutes. The way I have been getting around it is to disconnect from the Wifi connection and reconnect - and then it works fine again until the next disconnect. I am running 11.04 on an ASUS A52F. Can anyone help me please?

    Read the article

  • OSS App Hackathon @ National Information Society Agency

    - by Edward J. Yoon
    Yesterday there was an OSS App Hackathon arranged by the NIA (National Information Society Agency) in Seoul. I attended as a member of the judging panel, along with Prof. Lee of NEXT, the NHN university. A lot of people were there. You can read more details (Korean news) here: http://news.naver.com/main/read.nhn?mode=LSD&mid=sec&sid1=105&oid=138&aid=0001997038

    Read the article

  • NHibernate Tutorial #5 - Working with Many to Many relationships

    - by BobPalmer
    After a short break last week, I wanted to make sure I made time to publish the next in my series of tutorials on NHibernate. This week I'll be covering many-to-many relationships, the hilo algorithm, and the IdBag element, and touching on lazy loading. You can view the entire article at this link: http://docs.google.com/Doc?docid=0AUP-rKyyUMKhZGczejdxeHZfMjZkdjd3cjJnMg&hl=en As always, feedback and any technical bits I may have missed are appreciated! -Bob Palmer
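
    As a taste of what the tutorial covers, the entity side of a many-to-many pair might look like the sketch below. The Author/Book names are invented for illustration; the idbag element and the hilo id generator the article discusses live in the NHibernate mapping files, not in these classes.

        // Sketch: a many-to-many pair of NHibernate entities. Members are virtual
        // so NHibernate can proxy them for lazy loading (collections are lazy by
        // default). The link table behind Books/Authors is mapped with <idbag>,
        // giving each link row a surrogate id (typically generated via hilo).
        using System.Collections.Generic;

        public class Author
        {
            public virtual int Id { get; set; }
            public virtual string Name { get; set; }
            public virtual IList<Book> Books { get; set; }

            public Author() { Books = new List<Book>(); }
        }

        public class Book
        {
            public virtual int Id { get; set; }
            public virtual string Title { get; set; }
            public virtual IList<Author> Authors { get; set; }

            public Book() { Authors = new List<Author>(); }
        }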

    Read the article
