Search Results

Search found 11913 results on 477 pages for 'fail fast fail early'.

Page 250/477 | < Previous Page | 246 247 248 249 250 251 252 253 254 255 256 257  | Next Page >

  • Implementing RLE into a tilemap or how to create a large 3D array?

    - by Smallbro
    Currently I've been using a 3D array for my tiles in a 2D world, but the 3D side comes in when moving down into caves and whatnot. Now this is not memory efficient, so I switched over to a 2D array and can now have much larger maps. The only issue I'm having now is that my tiles cannot occupy the same space as a tile on the same z level. My current structure means that each block has its own z variable. This is what it used to look like: map.blockData[x][y][z] = new Block(); however, now it works like this: map.blockData[x][y] = new Block(z); I'm not sure why, but if I decide to use the same space on, say, the floor below, it won't allow me to. Does anyone have any ideas on how I can add a z-axis to my 2D array? I'm using Java, but I reckon the concept carries across different languages. Edit: As Will posted, RLE sounds like the best method for achieving a fast 3D array. However, I'm struggling to understand how I would even start to implement it. Would I create a 4D array, the 4th dimension being something which controls how many tiles to skip? Or would the x-axis simply change altogether and have large gaps in between - for example, would [5][y][z] skip 5 tiles? Is there something really obvious here which I am missing? The number of z levels I'm trying to have is around 66, and it would be preferable to have up to or more than 1000 tiles in x and y.
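    A minimal run-length-encoding sketch may help picture it. It is not from the original thread, it is written in C# (the poster uses Java, but the idea maps across directly), and the Run/RleColumn names are made up for illustration: instead of storing every z-level in block[x][y][z], each (x, y) column stores a list of runs, "tile id T repeated N times along z".

        using System;
        using System.Collections.Generic;

        struct Run
        {
            public int TileId;  // which tile this run holds
            public int Length;  // how many consecutive z-levels it covers
            public Run(int tileId, int length) { TileId = tileId; Length = length; }
        }

        class RleColumn
        {
            private readonly List<Run> runs = new List<Run>();

            // Append the next tile going down the z-axis; extends the last run if the tile repeats.
            public void Append(int tileId)
            {
                if (runs.Count > 0 && runs[runs.Count - 1].TileId == tileId)
                {
                    Run last = runs[runs.Count - 1];
                    runs[runs.Count - 1] = new Run(last.TileId, last.Length + 1);
                }
                else
                {
                    runs.Add(new Run(tileId, 1));
                }
            }

            // Decode on demand: walk the runs until the requested z-level falls inside one.
            public int GetTileAt(int z)
            {
                int depth = 0;
                foreach (Run run in runs)
                {
                    if (z < depth + run.Length) return run.TileId;
                    depth += run.Length;
                }
                throw new ArgumentOutOfRangeException(nameof(z));
            }
        }

    A 1000 x 1000 map would then be a 2D array of RleColumn objects; a column that is mostly air above mostly rock collapses to a handful of runs instead of 66 separate entries, which is where the memory saving comes from.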

    Read the article

  • Use Your Android Phone to Comparison Shop: 4 Scanner Apps Reviewed

    - by Jason Fitzpatrick
    A smart phone in your pocket is great for on the go news, web browsing, and—of course—mobile gaming. It’s also fantastic for comparison shopping. Today we take a look at four Android scanners and price comparison engines. It’s quite a neat time to be a consumer. Historically if you wanted to do serious price comparisons you had to haul yourself around town, gather flyers from the newspapers, and otherwise invest way too much energy into potential savings that might not even break into double digits. Now you can comparison shop with an ease that borders on magic: by simply pulling out your smart phone and scanning the barcode or typing in the name of the item you wish to compare. Today we’re taking a look at some of the more popular and powerful barcode scanners and price comparison engines available for the Android platform. Before we get to that, a word on our methodology. To test the barcode scanners and the resulting search results we wandered around and rounded up some relatively random items from around the How-To Geek offices. This included a children’s graphic novel, a Wii game, a board game, a pack of razors, a box of tea, and a bottle of nail polish. It’s a decent spread of consumer items that covers several genres. For each application we scanned all the items, looked for the best price at the time, and noted any other relevant benefits of using one scanner over another. It’s worth noting that our primary focus was on the speed and ease of use. You may find that certain scanners have specific features that best suit your needs. What we focused on was how fast you could scan, compare prices, and purchase items if you desired. Since all the scanners are free-as-in-beer, feel free to download them all and run your own tests to confirm our conclusions.

    Read the article

  • PASS Business Intelligence Virtual Chapter Upcoming Sessions (November 2013)

    - by Sergio Govoni
    Let me point out the upcoming live events, dedicated to Business Intelligence with SQL Server, that the PASS Business Intelligence Virtual Chapter has scheduled for November 2013.

    The "Accidental Business Intelligence Project Manager"
    Date: Thursday 7th November - 8:00 PM GMT / 3:00 PM EST / Noon PST
    Speaker: Jen Stirrup
    URL: https://attendee.gotowebinar.com/register/5018337449405969666
    You've watched The Apprentice with Donald Trump and Lord Alan Sugar. You know that the Project Manager is usually the one who gets fired. You've heard that Business Intelligence projects are prone to failure. You know that a quick Bing search for "why do Business Intelligence projects fail?" produces a search result of 25 million hits! Despite all this… you're now a Business Intelligence Project Manager – now what do you do? In this session, Jen will provide a "sparks from the anvil" series of steps and working practices in Business Intelligence Project Management. What about waterfall vs agile? What is a Gantt chart anyway? Is Microsoft Project your friend or a problematic aspect of being a BI PM? Jen will give you some ideas and insights that will help you set your BI project right: assess priorities, avoid conflict, empower the BI team and generally deliver the Business Intelligence project successfully!

    Dimensional Modelling Design Patterns: Beyond Basics
    Date: Tuesday 12th November - Noon AEDT / 1:00 AM GMT / Monday 11th November 5:00 PM PST
    Speaker: Jason Horner, Josh Fennessy and friends
    URL: https://attendee.gotowebinar.com/register/852881628115426561
    This session will provide a deeper dive into the art of dimensional modeling. We will look at the different types of fact tables and dimension tables, and how and when to use them. We will also cover some approaches to creating rich hierarchies that make reporting a snap. This session promises to be very interactive and engaging, so bring your toughest dimensional modeling quandaries.

    Data Vault Data Warehouse Architecture
    Date: Tuesday 19th November - 4:00 PM PST / 7:00 PM EST / Wednesday 20th November 11:00 PM AEDT
    Speaker: Jeff Renz and Leslie Weed
    URL: https://attendee.gotowebinar.com/register/1571569707028142849
    Data vault is a compelling architecture for an enterprise data warehouse using SQL Server 2012. A well-designed data vault data warehouse facilitates fast, efficient and maintainable data integration across business systems. In this session Leslie and I will review the basics of enterprise data warehouse design, introduce you to the data vault architecture, and discuss how you can leverage new features of SQL Server 2012 to help make your data warehouse solution provide maximum value to your users.

    Read the article

  • Antenna Aligner Part 8: It's Alive!!!

    - by Chris George
    Finally the day has come: Antenna Aligner v1.0.1 has been uploaded to the App Store and... "Waiting for review"... fast forward 7 days and much checking of emails later... WOO HOO! Now what? So I set my Facebook page to go live, https://www.facebook.com/AntennaAligner, and started by sending messages to my mates that have iPhones! Amazingly a few of them bought it! Similarly some of my colleagues were also kind enough to support me and downloaded it too! Unfortunately the only way I knew they had bought it was from them telling me, as the iTunes Connect data is only updated daily at about midday GMT. This is a shame; surely they could provide more granular updates throughout the day? Although I suppose once an app has been out in the wild for a while, daily updates are enough. It would, however, be nice to get a ping when you make your first sale! I would have expected more feedback on my Facebook page as well; maybe I'm just expecting too much, or perhaps I've configured the page wrong. The new Facebook timeline layout is just confusing, and I'm not sure it's all public - I'll check that! So please take a look and see what you think! I would love to get some more feedback/reviews/suggestions... Oh, and watch out for the Android version coming soon!

    Read the article

  • Proving a stolen computer's identity by getting hardware identification info from Launchpad bugs and comparing

    - by Kangarooo
    I sold my old laptop to neighbours and it was stolen from them. Well, I think I have found the thief, so I want to check his computer's hardware IDs and compare them to the IDs in my old Launchpad bugs. How can I find the following in Launchpad from my bugs:
    - Motherboard
    - HDD
    - Something else that can help identify it
    - Maybe how to recover or find some overwritten files (because now there is Windows on it)
    I found that one of my Launchpad bugs has an auto-generated lspci attachment, from bug 682846: https://launchpadlibrarian.net/70611231/Lspci.txt - but I don't see any ID there that can be used to identify specifically my computer. It could be used to identify many machines of the same model. Or did I miss something in there? And what commands should I use to get all the identification info on that computer in one fast go? Just lspci? How do I get the same lspci output as in that Launchpad link? Testing lspci on my computer now, I don't get that much info. Also, I'm now doing a search on my external HDD where I have many backups, and maybe I have an lspci result there. So what keywords would help when searching for the short lspci output and the full reports I've done? I might have done sudo lshw somefilename

    Read the article

  • How do I fix a garbled screen on a Gateway LT3103u?

    - by paracaudex
    I've been having garbled screen problems on a Gateway LT3103u on Ubuntu for a while. I just did a fresh install of Ubuntu 11.10 and continue to have issues. I installed xubuntu-desktop in case the issues had to do with the sophisticated GNOME graphics. The problem is less bad, but it's still there. After a few minutes of using XFCE, the screen gets garbled. I assume this has something to do with the graphics card, but I don't know how to go about troubleshooting something like this. Where should I start?
    Update: Here is the description of the VGA card from lspci -vvv:
        01:05.0 VGA compatible controller: ATI Technologies Inc RS690M [Radeon X1200 Series] (prog-if 00 [VGA controller])
        Subsystem: Acer Incorporated [ALI] Device 028c
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast TAbort- SERR- [disabled]
        Capabilities: [50] Power Management version 2
        Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
        Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
        Capabilities: [80] MSI: Enable- Count=1/1 Maskable- 64bit+
        Address: 0000000000000000  Data: 0000
        Kernel driver in use: radeon
        Kernel modules: radeon
    Update: Setting GRUB_CMDLINE_LINUX="nomodeset" in /etc/default/grub seems to have fixed it in both Ubuntu and xubuntu-desktop. I will test it for a day or so to see if the problems recur and then post more detail with some links to an explanation.
    Update 2: Is it possible to use this fix for an Nvidia card (GTX 260) when graphics are defective after an 11.10 upgrade/install? For the first few restarts the graphics were OK, then after a few restarts they suddenly became defective and stayed that way. I had to return to 11.04 because of this problem and I am waiting for 12.04, so I hope this fix works.

    Read the article

  • Why is wireless slow with Atheros AR9285?

    - by Luke
    I know there are many posts like this, however none of the fixes I have found have worked. I had the issue on 11.04, and after having no luck fixing it decided to try 12.04, however this has not fixed the problem. I'm using a Lenovo IdeaPad; the network card is an Atheros Communications AR9285. Edit: adding outputs:
        sudo iwconfig
        lo        no wireless extensions.
        wlan0     IEEE 802.11bgn  ESSID:"NETGEAR-PLOW"
                  Mode:Managed  Frequency:2.437 GHz  Access Point: E0:91:F5:7D:1B:BA
                  Bit Rate=65 Mb/s  Tx-Power=15 dBm
                  Retry long limit:7  RTS thr:off  Fragment thr:off
                  Encryption key:off
                  Power Management:on
                  Link Quality=66/70  Signal level=-44 dBm
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:77  Invalid misc:63  Missed beacon:0
        eth0      no wireless extensions.

        lspci -nnk | grep -iA2 net
        06:00.0 Network controller [0280]: Atheros Communications Inc. AR9285 Wireless Network Adapter (PCI-Express) [168c:002b] (rev 01)
                Subsystem: Lenovo Device [17aa:30a1]
                Kernel driver in use: ath9k
        --
        07:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller [10ec:8136] (rev 02)
                Subsystem: Lenovo Device [17aa:392e]
                Kernel driver in use: r8169
    Thanks

    Read the article

  • Should a stack trace be in the error message presented to the user?

    - by Vilx-
    I've got a bit of an argument at my workplace and I'm trying to figure out who is right, and what the right thing to do is. Context: an intranet web application that our customers use for accounting and other ERP stuff. I'm of the opinion that an error message presented to the user (when things crash) should include as much information as possible, including the stack trace. Of course, it has to start with a nice "An error has occurred, please submit the information below to the developers" in large, friendly letters. My reasoning is that a screenshot of the crashed application will often be the only easily available source of information. Sure, you can try to get hold of the client's systems administrator(s), attempt to explain where your log files are, etc., but that will probably be slow and painful (talking to the client representatives mostly is). Also, having immediate and full information is extremely useful in development, where you don't have to go hunting through the log files to find what you need on every exception. (But that could be solved with a configuration switch.) Unfortunately there has been some kind of "security audit" (no idea how they did that without the sources... but whatever), and they complained about the full exception messages, citing them as a security threat. Naturally, the clients (at least one that I know of) have taken this at face value and now demand that the messages be cleaned. I fail to see how a potential attacker could use a stack trace to figure out anything he couldn't have figured out before. Are there any examples, any documented proof of anyone ever doing that? I think that we should fight this foolish idea, but perhaps I'm the fool here, so... who's right?
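    For what it's worth, one common middle ground (not something the question itself proposes) is to log the full stack trace server-side under a generated reference ID and show the user only that ID. A minimal C# sketch, with the log path and message wording as placeholder assumptions:

        using System;
        using System.IO;

        public static class ErrorReporter
        {
            // Hypothetical log location - point this at wherever the real application logs.
            private const string LogPath = @"C:\Logs\app-errors.log";

            public static string BuildUserMessage(Exception ex)
            {
                // The reference ID is the only technical detail the user ever sees.
                string referenceId = Guid.NewGuid().ToString("N");

                // Everything else (message, stack trace, inner exceptions) goes to the log only.
                File.AppendAllText(LogPath,
                    $"[{referenceId}] {DateTime.UtcNow:O}{Environment.NewLine}{ex}{Environment.NewLine}");

                return "An error has occurred. Please contact support and quote reference " + referenceId + ".";
            }
        }

    A screenshot then still identifies the exact failure (support greps the log for the quoted ID), while nothing about the internals is exposed to the user, which addresses the audit's objection without losing diagnosability.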

    Read the article

  • Current SPARC Architectures

    - by Darryl Gove
    Different generations of SPARC processors implement different architectures. The architecture that the compiler targets is controlled implicitly by the -xtarget flag and explicitly by the -arch flag. If an application targets a recent architecture, then the compiler gets to play with all the instructions that the new architecture provides. The downside is that the application won't work on older processors that don't have the new instructions. So for developers there is a trade-off between performance and portability. The way we have solved this in the compiler is to assume a "generic" architecture, and we've made this the default behaviour of the compiler. The only flag that doesn't make this assumption is -fast, which tells the compiler to assume that the build machine is also the deployment machine - so the compiler can use all the instructions that the build machine provides. The -xtarget=generic flag tells the compiler explicitly to use this generic model. We work hard on making generic code work well across all processors, so in most cases this is a very good choice. It is also of interest to know which processors support the various architectures. The following Venn diagram attempts to show this; a textual description is as follows:
    - The T1 and T2 processors, in addition to most other SPARC processors shipped in the last 10+ years, support V9b, or sparcvis2.
    - The SPARC64 processors from Fujitsu, used in the M-series machines, added support for the floating point multiply accumulate instruction in the sparcfmaf architecture.
    - Support for this instruction also appeared in the T3 - this is called sparcvis3.
    - Later SPARC64 processors added the integer multiply accumulate instruction; this architecture is sparcima.
    - Finally, the T4 includes support for both the integer and floating point multiply accumulate instructions in the sparc4 architecture.
    So the conclusion should be: floating point multiply accumulate is supported in both the T-series and M-series machines, so it should be a relatively safe bet to start using it. The T4 is a very good machine to deploy to because it supports all the current instruction sets.

    Read the article

  • CodePlex Daily Summary for Thursday, October 17, 2013

    CodePlex Daily Summary for Thursday, October 17, 2013

    Popular Releases

    Social Network Importer for NodeXL: SocialNetImporter (v.1.9): This new version includes: - Download latest status update and use it as vertex tooltip - Limit the timelines to parse to me, my friends or both - Fixed some reported bugs about the fan page and group importer - Fixed the login bug reported lately

    DotNetNuke® Wiki: 05.00.00: Changes made to better support upgrades and the removal of deprecated legacy files that were causing formatting issues. Updated the Version number to better indicate the significance of the C# migration and the new DNN 7.0.2 minimum requirement.

    TerrariViewer: TerrariViewer v7.1 [Terraria Inventory Editor]: You can now backspace in number fields. Items added in 1.2.0.3 no longer corrupt player files. Buff durations capped at 9999999. Item stacks capped at 9999999. Version info added. Prefix IDs corrected. Shoe and Eye color box are now properly clickable. Moved Bank and Safe into their own tab. Users will now be notified of new updates.

    Python Tools for Visual Studio: 2.0: PTVS 2.0. We're pleased to announce the release of Python Tools for Visual Studio 2.0 RTM. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including CPython/IronPython, Edit/Intellisense/Debug/Profile, Cloud, IPython, and cross platform and cross language debugging support. QUICK VIDEO OVERVIEW: For a quick overview of the general IDE experience, please watch this v...

    C# Intellisense for Notepad++: Release v.1.0.8.2: Solved scrolling problem after DocumentFormatting. Implemented "format as you type". --- To avoid the DLLs getting locked by the OS, use the MSI file for the installation.

    CS-Script for Notepad++ (C# intellisense and code execution): Release v1.0.8.2: Solved scrolling problem after DocumentFormatting. Implemented "format as you type". --- To avoid the DLLs getting locked by the OS, use the MSI file for the installation.

    Collection Commander for Configuration Manager 2012: CMCollCtr 1.0.0: Change log: - MSI Setup - UI Improved - CM12 Console integration - New Powershell code snippets - Client Center Integration

    LINQ to Twitter: LINQ to Twitter v2.1.09: Supports .NET 3.5, .NET 4.0, .NET 4.5, Silverlight 4.0, Windows Phone 7.1, Windows Phone 8, Client Profile, Windows 8, and Windows Azure. 100% Twitter API coverage. Also supports Twitter API v1.1! Also on NuGet.

    Sandcastle Help File Builder: SHFB v1.9.8.0 with Visual Studio Package: General Information. IMPORTANT: On some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This new release contains bug fixes and feature enhancements. There are some potential breaking changes in this release as some features of the Help File Builder have been moved into...

    C++ REST SDK (codename "Casablanca"): C++ REST SDK 1.3.0: This release fixes multiple customer reported issues as well as the following: Full support for Dev12 binaries and project files. Full support for Windows XP. New sample highlighting the Client and Server APIs: BlackJack. Expose underlying native handle to set custom options on http_client. Improvements to Listener Library. Note: Dev10 binaries have been dropped as of this release, however the Dev10 project files are still available in the Source Code.

    AD ACL Scanner: 1.3.2: Minor bug fixed: Powershell 4.0 will report: Select-Object: Parameter cannot be processed because the parameter name p is ambiguous.

    Json.NET: Json.NET 5.0 Release 7: New feature - Added support for Immutable Collections. New feature - Added WriteData and ReadData settings to DataExtensionAttribute. New feature - Added reference and type name handling support to extension data. New feature - Added default value and required support to constructor deserialization. Change - Extension data is now written when serializing. Fix - Added missing casts to JToken. Fix - Fixed parsing large floating point numbers. Fix - Fixed not parsing some ISO date ...

    Fast YouTube Downloader: YouTube Downloader 2.2.0: YouTube Downloader 2.2.0

    VidCoder: 1.5.8 Beta: Added hardware acceleration options: Bicubic OpenCL scaling algorithm, QSV decoding/encoding and DXVA decoding. Updated HandBrake core to SVN 5834. Updated VidCoder setup icon. Fixed crash when choosing the mp4v2 container on x86 and opening on x64. Warning: the hardware acceleration features require specific hardware or file types to work correctly: QSV: Need an Intel processor that supports Quick Sync Video encoding, with a monitor hooked up to the Intel HD Graphics output and the lat...

    ASP.net MVC Awesome - jQuery Ajax Helpers: 3.5.2: version 3.5.2 - fix for setting single value to multivalue controls - datepicker min max date offset fix - html encoding for keys fix - enable Column.ClientFormatFunc to be a function call that will return a function; version 3.5.1 - fixed html attributes rendering - fixed loading animation rendering - css improvements; version 3.5 - autosize for all popups (can be turned off by calling in js awe.autoSize = false) - added Parent, Paremeter extensions ...

    Wsus Package Publisher: Release v1.3.1310.12: Allow the Update Creation Wizard to be set in full screen mode. Fix a bug which prevented WPP from resetting the Remote Sus Client ID. Change the behavior of links in the Update Detail Viewer: Left-Click to open, Right-Click to copy to the Clipboard.

    WDTVHubGen - Adds Metadata, thumbnails and subtitles to WDTV Live Hubs: WDTVHubGen.v2.1.6.maint: I think this covers all of the issues. New additions: fixed the thumbnail problem for backgrounds; general clean up and error checking. Need to get this put through the wringer, and all feedback is welcome.

    BIDS Helper: BIDS Helper 1.6.4: This BIDS Helper release brings the following new features and fixes: New Features: A new Bus Matrix style report option when you run the Printer Friendly Dimension Usage report for an SSAS cube. The Biml engine is now fully in sync with the supported subset of Varigence Mist 3.4. This includes a large number of language enhancements, bugfixes, and project deployment support. Fixed Issues: Fixed Biml execution for project connections, fixing a bug with Tabular Translations Editor not a...

    Free language translator and file converter: Free Language Translator 3.4: fixes for new version look up.

    PowerShell App Deployment Toolkit: PowerShell App Deployment Toolkit v3.0.6: Added PersistPrompt parameter to Show-InstallationWelcome and Show-InstallationPrompt. Prompt window is persistently returned to center screen after interval specified in config file (default 10 seconds). For Show-InstallationWelcome, this only takes effect if deferral is not available to the user. The user will have no option but to respond to the prompt - resistance is futile! Added example advanced Office 2010 deployment script. Asynchronous actions now write to the same log file as synchro...

    New Projects

    AdditionPage: This is a simple ASP.NET in VB.NET page that allows users to enter 2 numbers, and display their sum.

    Arad Enterprise Messaging Proxy: In these situations, the overhead and configuration complexity of an external webserver is seldom worth the trouble. AEMGP Server Implementation base on Socke.

    Boring Sudoku: Boring Sudoku is a Sudoku game, made for an SFML game programming tutorial. Feel free to download the source code to learn more about game programming.

    Configurator Debug: Configurator Debug is a tool that helps with debugging the queries of the Configurator feature in Dynamics AX 2009.

    Fast Query: FastQuery is a tool to execute MS SQL queries without SQL Server Management Studio (SSMS).

    gdrwebapp1: Test Project

    K2 workflow Manifestation d'interet with custom IPF methode: K2 Samples

    L Language Interpreter: We are now in the dev start stage, when none of the functionality is available, but probably you will be able to see it published. This should have tex output.

    LameXP - Audio Encoder Front-End: LameXP is a graphical user-interface for various audio encoders: It allows you to convert audio files from one format to another one in the most simple way.

    Learn Node.js: Node.js express jade ...

    localcompare: localhistory add new feature

    Open source WPF PDF Viewer: Open source PDF Viewer based on the Apitron PDF Rasterizer for .NET component that performs high-quality conversion from a PDF file to an image.

    Pfz.AnimationManagement: A .NET animation library that supports both declarative and imperative animations, capable of creating from simple animation to entire games.

    SerieSpotter: .net project

    SimpleUnitity: ??????? ??? DataBase, TextLog, Cache

    SPOnlineDevelopTool (SharePoint ??????): SPOnlineDevelopTool?????????SharePoint WebPart?SharePoint WebPart

    Voya Media: Voya Media is free, open-source and provides one central place to play and organize all your music, pictures and videos.

    Read the article

  • Three Fusion Applications Communities are Now Live

    - by cwarticki
    The Fusion Application Support Team (FAST) launched three communities on the My Oracle Support Community. These communities provide another channel for customers to get the information about Fusion Applications that they need. The three Fusion Applications communities are:
    - Technical - FA community -- covers all the Fusion Applications technology stack and technical questions from users.
    - Applications and Business Processes community -- covers all the functional questions and issues raised by users for all Fusion Applications except HCM.
    - Fusion Applications HCM community -- covers the functional questions and issues raised by users for the Fusion HCM product family.
    Good for Our Customers
    Customers participating in these communities can ask questions and get timely responses from Oracle Fusion Applications experts who monitor the communities. The customers can search the Fusion Applications Community contents for information and answers. They also can collaborate with other customers and benefit from the collective experience of the community -- especially from people like you. All customers and partners are invited to join My Oracle Support Community for Fusion Applications. We believe that participating in the Fusion Applications communities can be a win-win option for everyone. We invite you to become an active part of the thriving Fusion Applications communities and experience how this interesting and insightful dialog can benefit you.
    How to Join the Community
    Navigate to http://communities.oracle.com. Click the Profile Tab to register yourself and edit your profile.
    - You can subscribe to the Fusion Applications communities by editing your Community Subscriptions.
    - You can get RSS feeds for each of your subscribed communities from the same section.

    Read the article

  • Dual Boot, Dual Hard Drives!

    - by Mars
    I'm posting this question after reading most of the similar ones. My situation is different in that I'm installing on an SSD and not partitioning my HDD, and that I can actually boot! I'm just looking to improve the convenience of having an easier way to choose.
    1 - I have a Dell Inspiron 15R SE. It has an HDD (1TB) and an SSD (32GB). I managed to do whatever things I did in the distant past to set the SSD free (I don't really care how fast my system boots). Now I wanted to install Linux on the SSD and leave the HDD untouched - it's way too precious for me to mess with. So, I repartitioned the SSD to: 30GB for /root, 1GB for /swap, and 100MB for /boot. I installed Linux on the root partition and GRUB on the boot partition (of the SSD). Now GRUB immediately boots into Linux and doesn't allow me to boot into Windows. BUT! If I enable the UEFI boot manager and choose "Windows Boot Manager" after hitting F12, I can boot into Windows 8 normally. I'd say that's pretty OK, except I'd prefer to have the option to choose which one to boot, or at the very least default to booting Windows.
    2 - I'm concerned that if I now delete the SSD partition, the boot will break and I won't be able to boot anything! Does this seem like a valid concern? I made the choice of having Linux on the SSD because I'm going to be training on it, so I expect multiple resets from time to time.

    Read the article

  • How to connect two ubuntu computers with ethernet cable

    - by Lukasz Zaroda
    I'm trying to connect two computers - a desktop and a laptop - with an ethernet cable. What I want to do is transfer a lot of data from one to the other. The problem is that I'm doing everything from: How to network two Ubuntu computers using ethernet (without a router)? But after that, ping always gives me "Destination host unreachable". I was searching for a while but couldn't figure out the reason it doesn't work; maybe it's something about my devices, or maybe someone will have another idea. The ethernet cable came with my router. There is text printed on it: Aurit Data Cable Cat.5 UTP 26AWG 4PAIR AWM PUC 75°C EIA/TIA 568B. It's currently connecting my desktop to the router, so I can send this question.
    My desktop:
        System: Ubuntu 12.04
        Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 03)
        "ethtool -i eth0" output:
        driver: r8169
        version: 2.3LK-NAPI
        firmware-version: rtl_nic/rtl8168d-1.fw
        bus-info: 0000:01:00.0
        supports-statistics: yes
        supports-test: no
        supports-eeprom-access: no
        supports-register-dump: yes
    My laptop:
        System: Ubuntu 14.04
        Ethernet controller: Qualcomm Atheros AR8162 Fast Ethernet (rev 08)
        "ethtool -i eth0" output:
        driver: alx
        version:
        firmware-version:
        bus-info: 0000:01:00.0
        supports-statistics: no
        supports-test: no
        supports-eeprom-access: no
        supports-register-dump: no
        supports-priv-flags: no
    My iptables are accepting everything. Any ideas why I cannot reach the other computer?

    Read the article

  • Banshee crashes consistently - is there a fix?

    - by user36334
    Since updating to Ubuntu 11.10 I've had trouble with Banshee. In particular, when I run it I find that it crashes within an hour without fail. I get the following:
        Unhandled Exception: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.NullReferenceException: Object reference not set to an instance of an object
          at Mono.Zeroconf.Providers.AvahiDBus.BrowseService.DisposeResolver () [0x00000] in <filename unknown>:0
          at Mono.Zeroconf.Providers.AvahiDBus.BrowseService.Dispose () [0x00000] in <filename unknown>:0
          at Mono.Zeroconf.Providers.AvahiDBus.ServiceBrowser.OnItemRemove (Int32 interface, Protocol protocol, System.String name, System.String type, System.String domain, LookupResultFlags flags) [0x00000] in <filename unknown>:0
          at (wrapper managed-to-native) System.Reflection.MonoMethod:InternalInvoke (System.Reflection.MonoMethod,object,object[],System.Exception&)
          at System.Reflection.MonoMethod.Invoke (System.Object obj, BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) [0x00000] in <filename unknown>:0
          --- End of inner exception stack trace ---
          at System.Reflection.MonoMethod.Invoke (System.Object obj, BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) [0x00000] in <filename unknown>:0
          at System.Reflection.MethodBase.Invoke (System.Object obj, System.Object[] parameters) [0x00000] in <filename unknown>:0
          at System.Delegate.DynamicInvokeImpl (System.Object[] args) [0x00000] in <filename unknown>:0
          at System.MulticastDelegate.DynamicInvokeImpl (System.Object[] args) [0x00000] in <filename unknown>:0
          at System.Delegate.DynamicInvoke (System.Object[] args) [0x00000] in <filename unknown>:0
          at NDesk.DBus.Connection.HandleSignal (NDesk.DBus.Message msg) [0x00000] in <filename unknown>:0
          at NDesk.DBus.Connection.DispatchSignals () [0x00000] in <filename unknown>:0
          at NDesk.DBus.Connection.Iterate () [0x00000] in <filename unknown>:0
          at Mono.Zeroconf.Providers.AvahiDBus.DBusManager.IterateThread (System.Object o) [0x00000] in <filename unknown>:0
    Does anyone else also have this problem?

    Read the article

  • ClearTrace Supports Statement Level Events

    - by Bill Graziano
    One of the requests I get on a regular basis is to capture the performance of statement level events.  The latest beta has this feature available.  If you’re interested in this I’d like to get some feedback. I handle the SP:StmtCompleted and the SQL:StmtCompleted events.  These report CPU, reads, writes and duration. I’m not in any way saying it’s a good idea to trace these events.  Use with caution as this can make your traces much larger. If there are statement level events in the trace file they will be processed.  However the query screen displays batch level *OR* statement level events.  If it did both we’d be double counting. I don’t have very many traces with statement completed events in them.  That means I only did limited testing of how it parses these events.  It seems to work well so far though.  Your feedback is appreciated. If you ever write loops or cursors in stored procedures you’re going to get huge trace files.  Be warned. I also fixed an annoying bug where ClearTrace would fail and tell you a value had already been added.  This is a result of the collection I use being case-sensitive and SQL Server not being case-sensitive.  I thought I had properly coded around that but finally realized I hadn’t.  It should be fixed now. If you have any questions or problems the ClearTrace support forum is the best place for those.

    Read the article

  • Meet the New Windows Azure

    - by ScottGu
    Today we are releasing a major set of improvements to Windows Azure. Below is a short summary of just a few of them:

    New Admin Portal and Command Line Tools

    Today's release comes with a new Windows Azure portal that will enable you to manage all features and services offered on Windows Azure in a seamless, integrated way. It is very fast and fluid, supports filtering and sorting (making it much easier to use for large deployments), works on all browsers, and offers a lot of great new features – including built-in VM, Web site, Storage, and Cloud Service monitoring support. The new portal is built on top of a REST-based management API within Windows Azure – and everything you can do through the portal can also be programmed directly against this Web API. We are also today releasing command-line tools (which like the portal call the REST Management APIs) to make it even easier to script and automate your administration tasks. We are offering both a Powershell (for Windows) and Bash (for Mac and Linux) set of tools to download. Like our SDKs, the code for these tools is hosted on GitHub under an Apache 2 license.

    Virtual Machines

    Windows Azure now supports the ability to deploy and run durable VMs in the cloud. You can easily create these VMs using a new Image Gallery built into the new Windows Azure Portal, or alternatively upload and run your own custom-built VHD images. Virtual Machines are durable (meaning anything you install within them persists across reboots) and you can use any OS with them. Our built-in image gallery includes both Windows Server images (including the new Windows Server 2012 RC) as well as Linux images (including Ubuntu, CentOS, and SUSE distributions). Once you create a VM instance you can easily Terminal Server or SSH into it in order to configure and customize the VM however you want (and optionally capture your own image snapshot of it to use when creating new VM instances). This provides you with the flexibility to run pretty much any workload within Windows Azure. The new Windows Azure Portal provides a rich set of management features for Virtual Machines – including the ability to monitor and track resource utilization within them. Our new Virtual Machine support also enables the ability to easily attach multiple data-disks to VMs (which you can then mount and format as drives). You can optionally enable geo-replication support on these – which will cause Windows Azure to continuously replicate your storage to a secondary data-center at least 400 miles away from your primary data-center as a backup. We use the same VHD format that is supported with Windows virtualization today (and which we've released as an open spec), which enables you to easily migrate existing workloads you might already have virtualized into Windows Azure. We also make it easy to download VHDs from Windows Azure, which also provides the flexibility to easily migrate cloud-based VM workloads to an on-premise environment. All you need to do is download the VHD file and boot it up locally, no import/export steps required.

    Web Sites

    Windows Azure now supports the ability to quickly and easily deploy ASP.NET, Node.js and PHP web-sites to a highly scalable cloud environment that allows you to start small (and for free) and then scale up as your traffic grows. You can create a new web site in Azure and have it ready to deploy to in under 10 seconds. The new Windows Azure Portal provides built-in administration support for Web sites – including the ability to monitor and track resource utilization in real-time. You can deploy to web-sites in seconds using FTP, Git, TFS and Web Deploy. We are also releasing tooling updates today for both Visual Studio and Web Matrix that enable developers to seamlessly deploy ASP.NET applications to this new offering. The VS and Web Matrix publishing support includes the ability to deploy SQL databases as part of web site deployment – as well as the ability to incrementally update database schema with a later deployment. You can integrate web application publishing with source control by selecting the "Set up TFS publishing" or "Set up Git publishing" links on a web-site's dashboard. Doing so will enable integration with our new TFS online service (which enables a full TFS workflow – including elastic build and testing support), or create a Git repository that you can reference as a remote and push deployments to. Once you push a deployment using TFS or Git, the deployments tab will keep track of the deployments you make, and enable you to select an older (or newer) deployment and quickly redeploy your site to that snapshot of the code. This provides a very powerful DevOps workflow experience. Windows Azure now allows you to deploy up to 10 web-sites into a free, shared/multi-tenant hosting environment (where a site you deploy will be one of multiple sites running on a shared set of server resources). This provides an easy way to get started on projects at no cost. You can then optionally upgrade your sites to run in a "reserved mode" that isolates them so that you are the only customer within a virtual machine. And you can elastically scale the amount of resources your sites use – allowing you to increase your reserved instance capacity as your traffic scales. Windows Azure automatically handles load balancing traffic across VM instances, and you get the same, super fast deployment options (FTP, Git, TFS and Web Deploy) regardless of how many reserved instances you use. With Windows Azure you pay for compute capacity on a per-hour basis – which allows you to scale up and down your resources to match only what you need.

    Cloud Services and Distributed Caching

    Windows Azure also supports the ability to build cloud services that support rich multi-tier architectures, automated application management, and scale to extremely large deployments. Previously we referred to this capability as "hosted services" – with this week's release we are now referring to this capability as "cloud services". We are also enabling a bunch of new features with them.

    Distributed Cache

    One of the really cool new features being enabled with cloud services is a new distributed cache capability that enables you to use and set up a low-latency, in-memory distributed cache within your applications. This cache is isolated for use just by your applications, and does not have any throttling limits. This cache can dynamically grow and shrink elastically (without you having to redeploy your app or make code changes), and supports the full richness of the AppFabric Cache Server API (including regions, high availability, notifications, local cache and more). In addition to supporting the AppFabric Cache Server API, it also now supports the Memcached protocol – allowing you to point code written against Memcached at it (no code changes required). The new distributed cache can be set up to run in one of two ways:

    1) Using a co-located approach. In this option you allocate a percentage of memory in your existing web and worker roles to be used by the cache, and then the cache joins the memory into one large distributed cache. Any data put into the cache by one role instance can be accessed by other role instances in your application – regardless of whether the cached data is stored on it or another role. The big benefit with the "co-located" option is that it is free (you don't have to pay anything to enable it) and it allows you to use what might have been otherwise unused memory within your application VMs.

    2) Alternatively, you can add "cache worker roles" to your cloud service that are used solely for caching. These will also be joined into one large distributed cache ring that other roles within your application can access. You can use these roles to cache 10s or 100s of GBs of data in-memory very effectively – and the cache can be elastically increased or decreased at runtime within your application.

    New SDKs and Tooling Support

    We have updated all of the Windows Azure SDKs with today's release to include new features and capabilities. Our SDKs are now available for multiple languages, and all of the source in them is published under an Apache 2 license and maintained in GitHub repositories. The .NET SDK for Azure has in particular seen a bunch of great improvements with today's release, and now includes tooling support for both VS 2010 and the VS 2012 RC. We are also now shipping Windows, Mac and Linux SDK downloads for languages that are offered on all of these systems – allowing developers to develop Windows Azure applications using any development operating system.

    Much, Much More

    The above is just a short list of some of the improvements that are shipping in either preview or final form today – there is a LOT more in today's release. These include new Virtual Private Networking capabilities, new Service Bus runtime and tooling support, the public preview of the new Azure Media Services, new Data Centers, significantly upgraded network and storage hardware, SQL Reporting Services, new Identity features, support within 40+ new countries and territories, and much, much more. You can learn more about Windows Azure and sign-up to try it for free at http://windowsazure.com. You can also watch a live keynote I'm giving at 1pm June 7th (later today) where I'll walk through all of the new features. We will be opening up the new features I discussed above for public usage a few hours after the keynote concludes. We are really excited to see the great applications you build with them.

    Hope this helps,

    Scott

    Read the article

  • Running multiple box2D world objects on a server

    - by CharbelAbdo
    I'm creating a multiplayer game using LibGDX (with Box2D) and Kryonet. Since this is the first time I'm working on multiplayer games, I read a bit about server-client implementations, and it turns out that the server should handle important tasks like collision detection, hits, characters dying etc... Based on some articles (like the excellent Gabriel Gambetta "Fast-Paced Multiplayer" series), I also know that the client should work in parallel to avoid lag while the server responds to commands. Physics-wise, each game will have 2 players and any projectiles fired. What I'm thinking of doing is the following:
    - Create a physics world on the client.
    - When the game is signaled to start, I create the same physics world on the server (without any rendering, obviously).
    - Whenever the player issues a command (move or fire), I send the command to the server and immediately start processing it on the client.
    - When the server receives the command, it applies it to the server's world (set velocity etc...).
    - Every 100ms, the server sends the new state to the client, which corrects what was calculated locally.
    - Any critical action (hit, death, level up) is calculated only on the server and sent to the client.
    Essentially, I would have a Box2D World object running on the server for each game in progress, in sync with the worlds running on the clients. The alternative would be to do my own calculations on the server instead of relying on Box2D to do them for me, but I'm trying to avoid that. My question is: is it wise to have, for example, 1000 instances of the World object running and executing steps on the server? Tomcat used around 750 MB of memory when trying it without any objects added to the world. Has anybody tried that before? If not, is there any alternative? Google did not help me; are there any guidelines to use when you want to have physics on both the client and the server? Thanks for any help.
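    To put the cost question in concrete terms, here is a rough C# sketch of a server loop stepping many independent worlds at a fixed timestep. It is only a sketch: the original project is Java/LibGDX, and the IPhysicsWorld interface below is a hypothetical stand-in for Box2D's World (in LibGDX that step call would be world.step(1/60f, 6, 2)):

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;
        using System.Threading;

        // Hypothetical stand-in for a Box2D world object.
        public interface IPhysicsWorld
        {
            void Step(float deltaSeconds);
        }

        public class PhysicsServer
        {
            private readonly List<IPhysicsWorld> activeGames = new List<IPhysicsWorld>();
            private const float FixedStep = 1f / 60f;

            public void Run()
            {
                var clock = Stopwatch.StartNew();
                double previous = 0, accumulator = 0;

                while (true)
                {
                    double now = clock.Elapsed.TotalSeconds;
                    accumulator += now - previous;
                    previous = now;

                    // Step every game in progress with the same fixed timestep,
                    // so the server simulation stays comparable to the clients'.
                    while (accumulator >= FixedStep)
                    {
                        foreach (IPhysicsWorld world in activeGames)
                            world.Step(FixedStep);
                        accumulator -= FixedStep;
                    }

                    Thread.Sleep(1); // yield; a real server would also broadcast state snapshots here
                }
            }
        }

    Whether 1000 worlds is wise then comes down to whether 1000 Step calls (times the body count in each world) fit comfortably inside each 16 ms frame on the target hardware, so profiling a stripped-down loop like this is usually more telling than the idle memory figure alone.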

    Read the article

  • Metaphor for task synchronization [closed]

    - by nkint
    I'm looking for a metaphor. A friend of mine taught me to use metaphors from nature, everyday life, math, and use them to design my projects. They can help in creating a better design or a better understanding of the problem, and they are cool. Now I'm working on a project with hardware and micro-controllers in C. For convenience, I have decided to use multiple micro-controllers as co-processor units for real-time work (the slaves) and one master. This has saved me a lot of headache: I can code the main logic in the master without paying too much attention to super-optimizing everything; I don't care if I need some blocking call; I don't worry about serial communication with the computer. I just send messages to the slaves and they are super fast and super real-time. I like my design and it seems to work well. So here are the important concepts that I'm trying to capture in the metaphor:
    - hierarchy of processing
    - not using one big brain but rather several small, distributed brain units
    - using distributed power or resources
    I'm looking for a good metaphor for this concept of having one unit synchronize the work of all the others. Preferably, the metaphor would come from nature, biology, or zoology.

    Read the article

  • You Can't Upload An Empty File To SharePoint 2007 Or SharePoint 2010

    - by Brian Jackett
    The title of this post is pretty self-explanatory, but I thought it worth mentioning since I had never run across this rule until just recently. A few weeks ago I was testing out a new workflow attached to a SharePoint 2007 document library. I uploaded various file types to ensure all were handled properly. One of the files I happened to test with was an empty .txt file, to which I got the following error. As you can see from the error message, you aren't allowed to upload a file that is empty. Fast forward to this week, when I was doing some research for my upcoming SharePoint 2010 beta exams. I remembered the error I got a few weeks ago and decided to try it out with SharePoint 2010 as well. No surprises: I got a similar error.
    Conclusion
    Next time you are uploading files to a SharePoint 2007 or 2010 document library, make sure the file is not empty. Coincidentally, when I tweeted about this issue a few friends replied that they had also found this error recently. I don't know the internal reasoning why this is prevented, but I assume it has something to do with how the blob for the file is stored in the database. I assume that this would still be the case even if you had Remote Blob Storage (RBS) configured for your farm, but I don't have access to such a farm to confirm. If anyone reading this does have access and wants to confirm, that would be appreciated - just leave a comment.
    -Frog Out
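    If you hit this while uploading programmatically rather than through the UI, the simplest guard is to check the byte length before handing the file to the document library. A hedged sketch against the SharePoint server object model (the site URL and library name are placeholders):

        using System;
        using System.IO;
        using Microsoft.SharePoint;

        class UploadGuard
        {
            static void Upload(string siteUrl, string libraryName, string filePath)
            {
                byte[] contents = File.ReadAllBytes(filePath);

                // SharePoint 2007/2010 reject zero-byte files, so fail early with a clearer message.
                if (contents.Length == 0)
                    throw new InvalidOperationException("Cannot upload an empty file to SharePoint.");

                using (SPSite site = new SPSite(siteUrl))
                using (SPWeb web = site.OpenWeb())
                {
                    SPFolder folder = web.GetFolder(libraryName);
                    folder.Files.Add(Path.GetFileName(filePath), contents, true); // true = overwrite
                }
            }
        }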

    Read the article

  • What partition to use to keep data files in Ubuntu?

    - by Martin Lee
    I have been using Ubuntu for a few years and usually my partition setup was the following:
    - an Ext3 or Ext4 partition for the system itself (20 GB);
    - a 10 GB swap partition;
    - a big FAT32 partition to store movies, photos, work stuff, etc. (it depends on the capacity of the disk, but usually it is what is left after Ext3+swap; currently it is more than 200 GB).
    Does this setup sound right? I am considering switching to one big Ext3 partition now, because the problems with FAT32 in Ubuntu have not gone anywhere:
    - right now I can access my 'big' partition with a 'Data' label only through /media/_themes?END. Pretty strange name for a partition, isn't it?
    - some Linux software fails to read/write on this partition. For example, if I want to play around with rebar and build/make/compile things on this FAT32 partition, it will always complain about permissions and won't work (the same goes for many other kinds of software);
    - it is not stable: I cannot refer to some files on this FAT32 partition, because after the next reboot it will be called not '_themes?END', but something else.
    On the other side, I usually begin to run out of space on the Ext3 partition after a few months of usage. So, the question is - what is the best setup of partitions for an Ubuntu system? Should a FAT32 partition be used at all?

    Read the article

  • Engineered to Inform, Inspire, Entertain

    - by Oracle OpenWorld Blog Team
    by Karen Shamban
    Take note! Oracle OpenWorld keynote lineup announced.
    The lineup for the keynotes at this year's Oracle OpenWorld conference has just been announced. Expert speakers will provide insights into industry trends, the latest technology developments and futures, as well as key strategies for achieving business efficiency and innovation. Critical business drivers such as engineered systems, cloud computing, customer experience, and business analytics and big data will be featured topics. Executive keynotes include:
    - Oracle CEO Larry Ellison on "Hardware and Software, Engineered to Work Together: Why It's a Different Approach" and "The Oracle Cloud: Where Social is Built In"
    - Oracle President Mark Hurd discussing "Shift Complexity" with SVP of Oracle Database Development Andrew Mendelsohn, and "See More, Act Faster: Oracle Business Analytics"
    - Oracle EVP of Product Development Thomas Kurian focusing on "The Oracle Cloud: Oracle's Cloud Platform and Applications Strategy"
    - Oracle EVP of Systems John Fowler, Oracle Chief Corporate Architect Edward Screven, and Oracle SVP of Systems Technology Juan Loiaza on "Oracle Cloud Infrastructure and Engineered Systems: Fast, Reliable, Virtualized"
    For more information on speakers, topics, and schedule, go to the Oracle OpenWorld Keynotes page.

    Read the article

  • How to provide value?

    - by Francisco Garcia
    Before I became a consultant, all I cared about was becoming a highly skilled programmer. Now I believe that what my clients need is not a great hacker, coder, architect... or whatever. I am more and more convinced every day that there is something of greater value. Everywhere I go I discover practices where I used to roll my eyes in despair. I saw the software industry through pink glasses and laughed or cried at it depending on my mood. I was so convinced everything could be done better. Now I believe that what my clients desperately need is to find a balance between good engineering practices and desperate project execution. Although a great design can make a project cheap to maintain throughout many years, usually it is more important to produce something quick and cheap, just to see if the project can succeed. Before that, it does not really matter that much if the design is cheap to maintain; after that, it might be too late to improve things. They need people who get involved, who make some clandestine improvements to the project without their manager's approval/consent/knowledge... because they are never given time for some tasks we all know are important. Not all good things can be done; some of them must come out of free will, and some of them must be discussed in order to educate colleagues, managers, clients and ourselves. Now my big question is: what exactly are the skills and practices, aside from great coding, that can provide real value to the economical success of software projects? (And not the software architecture alone.)

    Read the article

  • Create Pivot collections much faster than DeepZoomTools CollectionCreator class

    - by John Conwell
    I've been playing with Microsoft Live Labs Pivot to create a hierarchy of collections, all linked together, to allow someone to explore a hierarchy of data visually. The problem has been the generation time of the entire hierarchy. I end up creating 500 - 600 collections total and it takes hours and hours using the CollectionCreator class that comes with the DeepZoomTools. So digging around I found a way to make the actual DeepZoom collection creation wicked fast: don't use the CollectionCreator! It turns out Pivot doesn't actually use the image pyramid generated by the CollectionCreator. Or if it does, it's only when you open a new collection and it shows all the images zooming in. But once the zoom-in is complete, Pivot uses the individual DeepZoom images. What Pivot does need is the XML generated by the CollectionCreator, which is in a very simple format. So what I did was manually generate the XML for the collection image pyramid, then create the folder structure required (one folder per level of the pyramid), and put a single-pixel PNG file in each folder. Now I can create the required files and folders for 500 collections in about 10 seconds. Sweet! You still have to use the ImageCreator to create a DeepZoom image for each image in the collection, and that still takes some time, but at least the total processing time is way better.
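    As a rough illustration of the folder-and-placeholder half of that trick (this is not the author's code, the collection XML itself is omitted, and the "_files" naming and level count are assumptions made for the sketch), something like the following C# creates one folder per pyramid level with a single-pixel PNG in each:

        using System.Drawing;
        using System.Drawing.Imaging;
        using System.IO;

        static class PlaceholderPyramid
        {
            // Writes "<name>_files/0" .. "<name>_files/<levels-1>", each containing a 1x1 PNG.
            static void Write(string outputDir, string name, int levels)
            {
                using (var pixel = new Bitmap(1, 1))
                {
                    pixel.SetPixel(0, 0, Color.Black);
                    for (int level = 0; level < levels; level++)
                    {
                        string levelDir = Path.Combine(outputDir, name + "_files", level.ToString());
                        Directory.CreateDirectory(levelDir);
                        pixel.Save(Path.Combine(levelDir, "0_0.png"), ImageFormat.Png);
                    }
                }
            }
        }

    The hand-written collection XML then points at this folder structure; per the observation above, Pivot only appears to touch the pyramid during the initial zoom-in animation.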

    Read the article

  • SMART: DISK FAILURE IS IMMINENT (under 24 hours?)

    - by flix
    I have 2 OSes on my hard drive: Ubuntu 12.04 and Windows Vista (I keep it just because of school). Everything was OK on both OSes, but one day on Ubuntu I started getting awkward noises from my notebook's hard drive, and then everything stopped and I couldn't do anything. On Windows everything was OK. Every time I boot into Ubuntu I get about 5 minutes of normal use without problems. After that the hard drive sounds crazy and nothing works. I could run S.M.A.R.T. tests from an older Ubuntu CD (10.04), from the GUI (Disk Utility, or something like that) and from the terminal. From the GUI I got that DISK FAILURE IS IMMINENT and that I have ~700 bad blocks (or broken blocks; I ran that test a while ago) on my HDD. From the terminal (I don't remember if it was fsck or a SMART test command) I got that the HDD will fail in under 24 hours. Since then 2-3 weeks have passed. I've tried "badblocks", but after 10 hours it was still running and I had to stop it. Now I have to use Cygwin and other alternatives for my Linux apps on Windows. PLEASE HELP!!! How can I separate the bad blocks from Ubuntu so it won't use them?

    Read the article

  • Use CompiledQuery.Compile to improve LINQ to SQL performance

    - by Michael Freidgeim
    After reading DLinq (Linq to SQL) Performance, and in particular Part 4, I had a few questions. If CompiledQuery.Compile gives so many benefits, why not do it for all LINQ to SQL queries? Are there any essential disadvantages to compiling all select queries? Under what conditions does compiling hurt performance, and by how much? It would be good to have a default at the application config level, or at the DBML level, to specify whether all select queries are to be compiled. And the same questions apply to the Entity Framework CompiledQuery class. However, in the comments I found the author's answer: "ricom 6 Jul 2007 3:08 AM Compiling the query makes it durable. There is no need for this, nor is there any desire, unless you intend to run that same query many times. SQL provides regular select statements, prepared select statements, and stored procedures for a reason. Linq now has analogs." Also, from 10 Tips to Improve your LINQ to SQL Application Performance: "If you are using CompiledQuery make sure that you are using it more than once as it is more costly than normal querying for the first time. The resulting function coming as a CompiledQuery is an object, having the SQL statement and the delegate to apply it. And your delegate has the ability to replace the variables (or parameters) in the resulting query." However, I feel that many developers are not informed enough about the benefits of Compile. I think that tools like FxCop and ReSharper should check the queries and suggest when compiling is recommended.
    Related articles for LINQ to SQL:
    - MSDN: How to: Store and Reuse Queries (LINQ to SQL)
    - 10 Tips to Improve your LINQ to SQL Application Performance
    Related articles for Entity Framework:
    - MSDN: CompiledQuery Class
    - Exploring the Performance of the ADO.NET Entity Framework - Part 1
    - Exploring the Performance of the ADO.NET Entity Framework - Part 2
    - ADO.NET Entity Framework 4.0: Making it fast through Compiled Query
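    For readers who have not used it, a typical CompiledQuery.Compile pattern looks roughly like this (a minimal sketch; the Customer table and connection string are illustrative, not from the article):

        using System;
        using System.Data.Linq;
        using System.Data.Linq.Mapping;
        using System.Linq;

        [Table(Name = "Customers")]
        public class Customer
        {
            [Column(IsPrimaryKey = true)] public string CustomerID { get; set; }
            [Column] public string City { get; set; }
        }

        public static class CustomerQueries
        {
            // Compiled once and cached in the static field, so the expression-tree-to-SQL
            // translation cost is paid only the first time; later calls reuse the delegate.
            public static readonly Func<DataContext, string, IQueryable<Customer>> ByCity =
                CompiledQuery.Compile((DataContext db, string city) =>
                    db.GetTable<Customer>().Where(c => c.City == city));
        }

        // Usage (connection string is a placeholder):
        // using (var db = new DataContext("Data Source=.;Initial Catalog=Northwind;Integrated Security=True"))
        // {
        //     var parisCustomers = CustomerQueries.ByCity(db, "Paris").ToList();
        // }

    The caveat quoted above still applies: the static delegate only pays off when ByCity runs many times; for a one-off query the extra compilation step is pure overhead.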

    Read the article
