Search Results

Search found 25362 results on 1015 pages for 'compiling from source'.

Page 602/1015 | < Previous Page | 598 599 600 601 602 603 604 605 606 607 608 609  | Next Page >

  • How do I change OSX package install directory?

    - by Scott
    It drives me nuts that every time I download a binary to run on OSX, it wants to install in system directories. ~/Applications is a perfectly fine place to install and doesn't require blindly authenticating somebody else's binary. Is there a way to change the install directory for packages? On a few I've been able to open the package and edit the plist to install it elsewhere, but that doesn't work universally. I install from source when I can, but it isn't always an option. Is there a good way to force the installer to use ~/Applications?

    Read the article

  • Why is my Tiled map distorted when rendered with LibGDX?

    - by Sean
    I have a Tiled map that looks like this in the editor: But when I load it using an AssetManager (full static source available on GitHub) it appears completely askew. I believe the relevant portion of the code is below. This is the entire method; the others are either empty or might as well be.

        private OrthographicCamera camera;
        private AssetManager assetManager;
        private BitmapFont font;
        private SpriteBatch batch;
        private TiledMap map;
        private TiledMapRenderer renderer;

        @Override
        public void create() {
            float w = Gdx.graphics.getWidth();
            float h = Gdx.graphics.getHeight();

            camera = new OrthographicCamera();
            assetManager = new AssetManager();
            batch = new SpriteBatch();
            font = new BitmapFont();

            camera.setToOrtho(false, (w / h) * 10, 10);
            camera.update();

            assetManager.setLoader(TiledMap.class,
                    new TmxMapLoader(new InternalFileHandleResolver()));
            assetManager.load(AssetInfo.ICE_CAVE.assetPath, TiledMap.class);
            assetManager.finishLoading();

            map = assetManager.get(AssetInfo.ICE_CAVE.assetPath);
            renderer = new IsometricTiledMapRenderer(map, 1f/64f);
        }

    Read the article

  • Generating a record of the full(-ish) package management state

    - by intuited
    I'm about to make some system changes and I'd like to have a record of my current happy system state. Is there a convenient way to create a record of this? I'd like to keep track of info like:
    - currently installed packages and their versions
    - which packages are pinned at what version
    - which source (as in /etc/apt/sources.list) they were installed from
    - whether they were installed directly or automatically installed as a dependency of a different package
    - "unknown unknowns": i.e. stuff that I don't know that I should be keeping track of but which may be important when trying to figure out why something doesn't work
    In short, I'd like to keep as much of the aptitude database as possible. What's the best way to do this? It would be nice if the resulting records were easily readable, though this is not really essential. It would be extra nice if it were readily versionable through an SCM tool like git. There is a superuser question that partially answers this, but it only provides the list of currently installed packages.
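
    The closest I've come on my own is snapshotting a few things with a small script, along these lines (a rough sketch only; the commands are standard dpkg/apt tools but the output layout is my own invention):

        #!/usr/bin/env python
        # Rough sketch: dump the apt/dpkg state into plain-text files that
        # diff nicely under git. The output directory name is a placeholder.
        import subprocess
        from pathlib import Path

        SNAPSHOT_DIR = Path("apt-state")
        SNAPSHOT_DIR.mkdir(exist_ok=True)

        COMMANDS = {
            # installed packages and their versions
            "packages.txt": ["dpkg-query", "-W", "-f", "${Package} ${Version}\n"],
            # packages installed automatically as dependencies
            "auto-installed.txt": ["apt-mark", "showauto"],
            # pins and per-source priorities
            "pinning.txt": ["apt-cache", "policy"],
        }

        for filename, cmd in COMMANDS.items():
            out = subprocess.check_output(cmd).decode()
            (SNAPSHOT_DIR / filename).write_text(out)

        # the sources themselves are worth versioning verbatim as well
        src = Path("/etc/apt/sources.list").read_text()
        (SNAPSHOT_DIR / "sources.list").write_text(src)

    It still misses the "unknown unknowns", which is really what I'm asking about.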

    Read the article

  • network timeout how to analyse problem

    - by elhombre
    In the last three months my internet connection has become very slow and websites take a long time to load. The first thing I did was ping www.google.com, which showed that I was losing packets. Here are some of the results:

        64 bytes from 74.125.39.103: icmp_seq=2909 ttl=53 time=48.222 ms
        Request timeout for icmp_seq 2910
        Request timeout for icmp_seq 2911
        64 bytes from 74.125.39.103: icmp_seq=2912 ttl=53 time=44.372 ms

    Days later I had to reset my router because it wasn't able to establish a correct network connection. After the reset things worked again for some days, but then the same network timeouts started to happen again. I would like to know how I can analyze the problem to get to the source of these timeouts. What steps would you take to zero in on this problem? My network: laptop -> wireless -> router/modem -> ISP. EDIT: I am on Mac OS X 10.6.
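
    The only extra step I've taken so far is logging the loss over time, hoping to correlate it with something, using a small script like this (a rough sketch; it just watches the same BSD-style ping output shown above):

        #!/usr/bin/env python
        # Rough sketch: run ping continuously and timestamp every lost
        # packet, so loss bursts can be matched against time of day or
        # router events. Assumes BSD-style "Request timeout for icmp_seq"
        # output as in the excerpt above.
        import subprocess
        from datetime import datetime

        HOST = "www.google.com"
        proc = subprocess.Popen(["ping", HOST], stdout=subprocess.PIPE,
                                universal_newlines=True)
        sent = lost = 0
        for line in proc.stdout:
            if "icmp_seq" not in line:
                continue  # skip ping's header/summary lines
            sent += 1
            if "timeout" in line.lower():
                lost += 1
                print("%s lost packet (%d/%d, %.1f%% loss so far)" % (
                    datetime.now().isoformat(), lost, sent,
                    100.0 * lost / sent))

    But that only confirms the loss; it doesn't tell me where it happens (laptop, wireless, router or ISP).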

    Read the article

  • What is the best way to build a database from a MS Word document?

    - by Jayron Soares
    Please advise me on how to approach this problem: I have a sequential list of metadata in a document in MS Word. The basic idea is to create a Python algorithm to iterate over the information, retrieving just the name of the PROCESS when a query is made against a database. Example metadata:

        Process: Process Walker (1965)
        Exact reference: Walker Process Equipment., Inc. v. Food Machinery Corp.
        Link: http://caselaw.lp.findlaw.com/scripts/getcase.pl?court=US&vol=382&invol=
        Type of procedure: Certiorari to the United States Court of Appeals for the Seventh Circuit.
        Parties: Walker Process Equipment, Inc.
        Sector: Systems is ...
        Start Date: October 12-13 Arguedas, 1965
        Summary: Food Machinery Company has initiated a process to stop or slow the entry of competitors through the use of a patent obtained by fraud. The case concerned a patent on "knee action swing diffusers" used in aeration equipment for sewage treatment systems, and the question was whether "the maintenance and enforcement of a patent obtained by fraud before the patent office" may be a basis for antitrust punishment.
        Report of the evolution process: petitioner, in answer to respond...
        Importance: a) First case which established an analysis for the diagnosis of dispute…

    There are about 200 pages containing the information above. I have in mind the idea of implementing an algorithm in Python that can break this sequence of information apart and store it in a web database (an open-source application that I'm looking for) in order to allow for free consultations.
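
    To make the question concrete, this is roughly the direction I had in mind, a sketch only (it assumes the document has first been exported to plain text, and that every record starts with a "Process:" line; both are assumptions of mine):

        # Rough sketch: split an exported plain-text version of the Word
        # document into records and parse each "Field: value" pair.
        # Assumes every record begins with a "Process:" line; lines that
        # don't look like a new field are folded into the previous field.
        import re

        FIELD_RE = re.compile(r"^([A-Z][^:]{0,40}):\s*(.*)$")

        def parse_records(text):
            records, current, last_field = [], None, None
            for line in text.splitlines():
                m = FIELD_RE.match(line.strip())
                if m:
                    field, value = m.group(1), m.group(2)
                    if field == "Process":  # a new record starts here
                        current = {}
                        records.append(current)
                    if current is not None:
                        current[field] = value
                        last_field = field
                elif current is not None and last_field and line.strip():
                    # continuation line: append to the previous field
                    current[last_field] += " " + line.strip()
            return records

        # Usage:
        # records = parse_records(open("metadata.txt").read())
        # records[0]["Process"]  ->  "Process Walker (1965)"

    Would something along these lines be a reasonable starting point, or is there a better-established approach?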

    Read the article

  • Packet loss with Broadcom wireless and Ubuntu

    - by stevemegson
    I have a machine with Ubuntu 9.04 and a Broadcom BCM4306 wireless card which has been running quite happily for many months with the b43legacy driver that's suggested by Hardware Drivers. Suddenly, I'm now seeing crippling packet loss, varying between 30% and 90% when pinging the router. My WinXP machine is still quite happy even when sat next to the Ubuntu box, so I assume that it's not some new source of interference. What else is likely to have changed? How should I go about diagnosing and fixing this?

    Read the article

  • How to read Scala code with lots of implicits?

    - by Petr Pudlák
    Consider the following code fragment (adapted from http://stackoverflow.com/a/12265946/1333025):

        // Using scalaz 6
        import scalaz._, Scalaz._

        object Example extends App {
          case class Container(i: Int)

          def compute(s: String): State[Container, Int] = state {
            case Container(i) => (Container(i + 1), s.toInt + i)
          }

          val d = List("1", "2", "3")
          type ContainerState[X] = State[Container, X]

          println( d.traverse[ContainerState, Int](compute) ! Container(0) )
        }

    I understand what it does at a high level. But I wanted to trace what exactly happens during the call to d.traverse at the end. Clearly, List doesn't have traverse, so it must be implicitly converted to another type that does. Even though I spent a considerable amount of time trying to find out, I wasn't very successful. First I found that there is a method in scalaz.Traversable:

        traverse[F[_], A, B](f: (A) => F[B], t: T[A])(implicit arg0: Applicative[F]): F[T[B]]

    but clearly this is not it (although it's most likely that "my" traverse is implemented using this one). After a lot of searching, I grepped the scalaz source code and found scalaz.MA's method:

        traverse[F[_], B](f: (A) => F[B])(implicit a: Applicative[F], t: Traverse[M]): F[M[B]]

    which seems to be very close. Still, I don't see what List is converted to in my example, or whether it uses MA.traverse or something else. The question is: What procedure should I follow to find out what exactly is called at d.traverse? Having even such simple code be this hard to analyze seems to me like a big problem. Am I missing something very simple? How should I proceed when I want to understand code that uses a lot of imported implicits? Is there some way to ask the compiler what implicits it used? Or is there something like Hoogle for Scala so that I can search for a method just by its name?

    Read the article

  • Quick Outline: Navigating Your PL/SQL Packages in Oracle SQL Developer

    - by thatjeffsmith
    If you’re browsing your packages using the Connections panel, you have a nice tree navigator to click around your packages and your variables, procedures, and functions. Click, click, click all day long, click, click, click while I sing this song… But what if you drill into your PL/SQL source from the worksheet and don’t have the tree expanded? Let’s say you’re working on your script, something like - Hmm, what goes next again? So I need to reacquaint myself with just what my beer package requires, and I’m going to drill into it by doing a DESCRIBE (via SHIFT+F4), and now I have the package open. The package is open but the tree hasn’t auto-expanded. Please don’t tell me I have to do the click-click-click thing in the tree!?!

    Just Open the Quick Outline Panel

    Do you see it? Just right-click in the procedure editor, select ‘Quick Outline’ in the context menu, and voila! The navigational power of the tree, without needing to drill down the tree itself. If I want to drill into my procedure declaration, I just click on said procedure name in the Quick Outline panel. This works for both package specs and bodies. Technically you can use this for stand-alone procedures and functions, but the real power is demonstrated for packages.

    Read the article

  • Leaving Microsoft

    - by Stephen Walther
    After two and a half years working with the ASP.NET team, I’ve decided that this is the right time to leave Microsoft and, with the help of some friends, re-launch my ASP.NET training and consulting company. The company has the modest name Superexpert. While working on my Ph.D. at MIT, I was surrounded by professors and students who were passionate about knowledge. During the Internet boom, I was lucky enough to work side-by-side with some very smart and hard-working people to create several successful startups. However, the people I worked with at Microsoft were among the smartest and hardest working. Microsoft hires a small number of people and gives them huge responsibilities. It continues to amaze me that so few people work on the ASP.NET team when you consider how much the team produces. I had the opportunity to work with a number of inspiring people at Microsoft. I’ll miss working with Scott Hunter, Dave Reed, Boris Moore, Eilon Lipton, Scott Guthrie, James Senior, Jim Wang, Phil Haack, Damian Edwards, Vishal Joshi, Mike Pope, Jon Young, Dmitry Robsman, Simon Calvert, Stefan Schackow, and many others. I’m proud of what we accomplished while I was working at Microsoft. We reached out to the jQuery team and changed direction from Microsoft Ajax to jQuery. We successfully contributed several important new features to the open-source jQuery project including jQuery Templates, jQuery Data-Linking, jQuery Globalization, and (as John Resig announced at the last jQuery conference) jQuery Require. I’m looking forward to returning to training and consulting. We want to focus on providing consulting on the “right way” of building ASP.NET websites, which we call Modern ASP.NET applications. By Modern ASP.NET applications, I mean applications built with ASP.NET MVC, jQuery, HTML5, and Visual Studio ALM. Additionally, we want to help companies that have existing ASP.NET Web Forms applications migrate to ASP.NET MVC. If you are interested in having us provide training for your company or you need help building a custom ASP.NET application then please contact us at [email protected] or visit our website at Superexpert.com.

    Read the article

  • Profile Picture Thumbnails, Following Projects, and Fork Collaboration

    [Do you tweet? Follow us on Twitter @matthawley and @adacole_msft] We deployed a new version of the CodePlex website last week.

    Profile Picture Thumbnails

    We have added a way to select a thumbnail from your profile picture, which will start appearing next to usernames across the site. Managing your thumbnail is simple. From your profile page, choose Edit your profile. On the left side, you’ll find an intuitive widget for choosing a profile picture, uploading it, and editing your thumbnail image. If you previously uploaded a profile picture, we’ve used that to generate a starter thumbnail. We welcome your suggestions and ideas for areas where seeing user thumbnails would be useful or interesting.

    Following Projects

    Based on some feedback we’ve received recently, we have taken several steps to help you discover and follow interesting and popular projects on CodePlex:
    - The homepage now surfaces the top Projects Users are Following from the previous 7 days.
    - When you visit any project homepage, you can see at a glance how many people follow the project.
    - When you visit the People tab for any project, you will see both the project contributors and the 25 most recent project followers.

    Fork Collaboration

    We now support enabling collaborators on a fork, based on a large number of user requests. From the Source Code management page for your fork, you will now see the following on the right side: To add a collaborator, type in a username and click Add. All fork collaborators will have the ability to push to the fork and send/cancel pull requests. To remove a collaborator, hover over the user, and click on the X that appears. The CodePlex team values your feedback, and is frequently monitoring Twitter, our Discussions and Issue Tracker for new features or problems. If you’ve not visited the Issue Tracker recently, please take a few moments to log an idea or vote for the features you would most like to see implemented on CodePlex.

    Read the article

  • Why am I getting this "Connection to PulseAudio failed" error?

    - by Dave M G
    I have a computer that runs Mythbuntu 11.10. It has an external USB Kenwood Digital Audio device. When I open up pavucontrol, I get this message: If I do as the message suggests and run start-pulseaudio-x11, I get this output:

        $ start-pulseaudio-x11
        Connection failure: Connection refused
        pa_context_connect() failed: Connection refused

    How do I correct this error? Update: Somewhere during the course of doing the suggested tests in the comments, a new audio device has become visible in my sound settings. I have not attached or made any new device, so this must be the result of some setting change. The device I use and know about is the Kenwood Audio device. The "GF108" device will play sound through the Kenwood anyway, but not reliably. Command line output as requested in the comments:

        $ ls -l ~/.pulse*
        -rw------- 1 mythbuntu mythbuntu 256 Feb 28 2011 /home/mythbuntu/.pulse-cookie

        /home/mythbuntu/.pulse:
        total 200
        -rw-r--r-- 1 mythbuntu mythbuntu  8192 Oct 23 01:38 2b98330d36bf53bb85c97fc300000008-card-database.tdb
        -rw-r--r-- 1 mythbuntu mythbuntu    69 Nov 16 22:51 2b98330d36bf53bb85c97fc300000008-default-sink
        -rw-r--r-- 1 mythbuntu mythbuntu    68 Nov 16 22:51 2b98330d36bf53bb85c97fc300000008-default-source
        -rw-r--r-- 1 mythbuntu mythbuntu 49152 Oct 14 12:30 2b98330d36bf53bb85c97fc300000008-device-manager.tdb
        -rw-r--r-- 1 mythbuntu mythbuntu 61440 Oct 23 01:40 2b98330d36bf53bb85c97fc300000008-device-volumes.tdb
        lrwxrwxrwx 1 mythbuntu mythbuntu    23 Nov 16 22:50 2b98330d36bf53bb85c97fc300000008-runtime -> /tmp/pulse-EAwvLIQZn7e8
        -rw-r--r-- 1 mythbuntu mythbuntu 77824 Nov  1 12:54 2b98330d36bf53bb85c97fc300000008-stream-volumes.tdb

    And yet more requested command line output:

        $ ps auxw|grep pulse
        1000  2266  0.5  0.2 294184 9152 ?     S<l  Nov16   4:26 pulseaudio -D
        1000  2413  0.0  0.0  94816 3040 ?     S    Nov16   0:00 /usr/lib/pulseaudio/pulse/gconf-helper
        1000  4875  0.0  0.0   8108  908 pts/0 S+   12:15   0:00 grep --color=auto pulse

    Read the article

  • MacBook Pro Boot Camp SPDIF passthrough?

    - by Ryan Zink
    I'm using Windows 7 through Boot Camp on a unibody MacBook Pro and am having problems using the SPDIF output. I get the expected Dolby Digital or DTS in some movies, but in other movies and in games (Source engine, StarCraft 2) where 5.1 output is enabled, the output invariably shows up as Dolby Pro Logic, which means (I think) that passthrough is not enabled. The Boot Camp drivers for the sound card don't have any sort of control panel, and the Windows settings for enabling DTS and Dolby seem to work when I test those outputs in the sound settings. Is there some other setting or utility I can use to enable SPDIF passthrough for all programs?

    Read the article

  • Failed to download repository information Check your Internet connection

    - by Luca Brazza
    I need to check if I have updates for Ubuntu. I think it is 11.05. As you can see, this is what it says:

        Failed to download repository information
        Check your Internet connection.

        Details:
        W:Failed to fetch cdrom://Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)/dists/precise/main/binary-amd64/Packages Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs,
        W:Failed to fetch cdrom://Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)/dists/precise/restricted/binary-amd64/Packages Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs,
        W:Failed to fetch cdrom://Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)/dists/precise/main/binary-i386/Packages Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs,
        W:Failed to fetch cdrom://Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)/dists/precise/restricted/binary-i386/Packages Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs,
        W:Failed to fetch http://ppa.launchpad.net/ferramroberto/java/ubuntu/dists/precise/main/source/Sources 404 Not Found,
        W:Failed to fetch http://ppa.launchpad.net/ferramroberto/java/ubuntu/dists/precise/main/binary-i386/Packages 404 Not Found,
        E:Some index files failed to download. They have been ignored, or old ones used instead.

    Read the article

  • The fastest way to resize images from ASP.NET. And it’s (more) supported-ish.

    - by Bertrand Le Roy
    I’ve shown before how to resize images using GDI, which is fairly common but is explicitly unsupported because we know of very real problems that this can cause. Still, many sites use that method because those problems are fairly rare, and because most people assume it’s the only way to get the job done. Plus, it works in medium trust. More recently, I’ve shown how you can use WPF APIs to do the same thing and get JPEG thumbnails, only 2.5 times faster than GDI (even now that GDI really ultimately uses WIC to read and write images). The boost in performance is great, but it comes at a cost that you may or may not care about: it won’t work in medium trust. It’s also just as unsupported as the GDI option. What I want to show today is how to use the Windows Imaging Components from ASP.NET APIs directly, without going through WPF. The approach has the great advantage that it’s been tested and proven to scale very well. The WIC team tells me you should be able to call support and get answers if you hit problems. Caveats exist though. First, this is using interop, so until a signed wrapper sits in the GAC, it will require full trust. Second, the APIs have a very strong smell of native code and are definitely not .NET-friendly. And finally, the most serious problem is that older versions of Windows don’t offer MTA support for image decoding. MTA support is only available on Windows 7, Vista and Windows Server 2008. On 2003 and XP, you’ll only get STA support, which means that the thread safety we so badly need for server applications is not guaranteed on those operating systems. To make it work there, you’d have to spin up specialized threads yourself and manage the lifetime of your objects, which is outside the scope of this article. We’ll assume that we’re fine with all this and that we’re running on 7 or 2008 under full trust. Be warned that the code that follows is not simple or very readable. This is definitely not the easiest way to resize an image in .NET. Wrapping native APIs such as WIC in a managed wrapper is never easy, but fortunately we won’t have to: the WIC team already did it for us and released the results under MS-PL. The InteropServices folder, which contains the wrappers we need, is in the WicCop project but I’ve also included it in the sample that you can download from the link at the end of the article.

    In order to produce a thumbnail, we first have to obtain a decoding frame object that WIC can use. Like with WPF, that object will contain the command to decode a frame from the source image but won’t do the actual decoding until necessary. Getting the frame is done by reading the image bytes through a special WIC stream that you can obtain from a factory object that we’re going to reuse for lots of other tasks:

        var photo = File.ReadAllBytes(photoPath);
        var factory = (IWICComponentFactory)new WICImagingFactory();
        var inputStream = factory.CreateStream();
        inputStream.InitializeFromMemory(photo, (uint)photo.Length);
        var decoder = factory.CreateDecoderFromStream(
            inputStream, null, WICDecodeOptions.WICDecodeMetadataCacheOnLoad);
        var frame = decoder.GetFrame(0);

    We can read the dimensions of the frame using the following (somewhat ugly) code:

        uint width, height;
        frame.GetSize(out width, out height);

    This enables us to compute the dimensions of the thumbnail, as I’ve shown in previous articles. We now need to prepare the output stream for the thumbnail. WIC requires a special kind of stream, IStream (not implemented by System.IO.Stream), and doesn’t directly understand .NET streams. It does provide a number of implementations, but not exactly what we need here. We need to output to memory because we’ll want to persist the same bytes to the response stream and to a local file for caching. The memory-bound version of IStream requires a fixed-length buffer, but we won’t know the length of the buffer before we resize. To solve that problem, I’ve built a derived class from MemoryStream that also implements IStream. The implementation is not very complicated, it just delegates the IStream methods to the base class, but it involves some native pointer manipulation.

    Once we have a stream, we need to build the encoder for the output format, which could be anything that WIC supports. For web thumbnails, our only reasonable options are PNG and JPEG. I explored PNG because it’s a lossless format, and because WIC does support PNG compression. That compression is not very efficient though, and JPEG offers good quality with much smaller file sizes. On the web, it matters. I found the best PNG compression option (adaptive) to give files that are about twice as big as 100%-quality JPEG (an absurd setting), 4.5 times bigger than 95%-quality JPEG and 7 times larger than 85%-quality JPEG, which is more than acceptable quality. As a consequence, we’ll use JPEG. The JPEG encoder can be prepared as follows:

        var encoder = factory.CreateEncoder(
            Consts.GUID_ContainerFormatJpeg, null);
        encoder.Initialize(outputStream,
            WICBitmapEncoderCacheOption.WICBitmapEncoderNoCache);

    The next operation is to create the output frame:

        IWICBitmapFrameEncode outputFrame;
        var arg = new IPropertyBag2[1];
        encoder.CreateNewFrame(out outputFrame, arg);

    Notice that we are passing in a property bag. This is where we’re going to specify our only parameter for encoding, the JPEG quality setting:

        var propBag = arg[0];
        var propertyBagOption = new PROPBAG2[1];
        propertyBagOption[0].pstrName = "ImageQuality";
        propBag.Write(1, propertyBagOption, new object[] { 0.85F });
        outputFrame.Initialize(propBag);

    We can then set the resolution for the thumbnail to be 96, something we weren’t able to do with WPF and had to hack around:

        outputFrame.SetResolution(96, 96);

    Next, we set the size of the output frame and create a scaler from the input frame and the computed dimensions of the target thumbnail:

        outputFrame.SetSize(thumbWidth, thumbHeight);
        var scaler = factory.CreateBitmapScaler();
        scaler.Initialize(frame, thumbWidth, thumbHeight,
            WICBitmapInterpolationMode.WICBitmapInterpolationModeFant);

    The scaler is using the Fant method, which I think is the best-looking one even if it seems a little softer than cubic (comparison images, zoomed to better show the defects: Cubic, Fant, Linear, Nearest neighbor). We can write the source image to the output frame through the scaler:

        outputFrame.WriteSource(scaler, new WICRect {
            X = 0, Y = 0, Width = (int)thumbWidth, Height = (int)thumbHeight
        });

    And finally we commit the pipeline that we built and get the byte array for the thumbnail out of our memory stream:

        outputFrame.Commit();
        encoder.Commit();
        var outputArray = outputStream.ToArray();
        outputStream.Close();

    That byte array can then be sent to the output stream and to the cache file. Once we’ve gone through this exercise, it’s only natural to wonder whether it was worth the trouble. I ran this method, as well as GDI and WPF resizing, over thirty twelve-megapixel images for JPEG qualities between 70% and 100% and measured the file size and time to resize (charts: size of resized images; time to resize thirty 12 megapixel images). Not much to see on the size graph: sizes from WPF and WIC are equivalent, which is hardly surprising as WPF calls into WIC. There is just an anomaly at 75% for WPF that I noted in my previous article and that disappears when using WIC directly. But overall, using WPF or WIC over GDI represents a slight win in file size. The time to resize is more interesting. WPF and WIC get similar times, although WIC seems to always be a little faster. Not surprising considering WPF is using WIC. The margin of error on these results is probably fairly close to the time difference. As we already knew, the time to resize does not depend on the quality level, only the size does. This means that the only decision you have to make here is size versus visual quality.

    This third approach to server-side image resizing on ASP.NET seems to converge on the fastest possible one. We have marginally better performance than WPF, but with some additional peace of mind that this approach is sanctioned for server-side usage by the Windows Imaging team. It still doesn’t work in medium trust. That is a problem, and shows the way for future server-friendly managed wrappers around WIC. The sample code for this article can be downloaded from: http://weblogs.asp.net/blogs/bleroy/Samples/WicResize.zip The benchmark code can be found here (you’ll need to add your own images to the Images directory and then add those to the project, with content and copy if newer in the properties of the files in the solution explorer): http://weblogs.asp.net/blogs/bleroy/Samples/WicWpfGdiImageResizeBenchmark.zip WIC tools can be downloaded from: http://code.msdn.microsoft.com/wictools To conclude, here are some of the resized thumbnails at 85% Fant.

    Read the article

  • Windows 7 randomly installs an "Unknown Device" successfully

    - by Amazed
    Rarely (several days to weeks between occurrences), and seemingly at random, I get a balloon notification from Windows 7 (x64 SP1 Home Premium) that it is installing hardware for me. Whatever is being installed does so without error. However, no new hardware has been installed or plugged in! When I click the balloon it doesn't give me any useful information. Looking in the event log, I find this entry:

        Event ID: 20001
        Source: UserPnp
        Task Category: 7005
        Message: Driver Management concluded the process to install driver
        FileRepository\usb.inf_amd64_neutral_153b489118ee37b8\usb.inf for Device Instance ID
        USB\VID_0000&PID_0000\6&3AF9A177&0&0060&&02 with the following status: 0x0.

    It appears to be USB related. My motherboard has both USB 2.0 and 3.0 controllers. My keyboard and mouse are plugged into the 2.0 slots and the data/recharge cable for a tablet (but not the tablet itself) was plugged into the 3.0 slot. No other USB devices have been attached for several days/reboots. Why is Windows doing this?

    Read the article

  • Storing Entity Framework Entities in a Separate Assembly

    - by Anthony Trudeau
    The Entity Framework has been valuable to me since it came out, because it provided a convenient and powerful way to model against my data source in a consistent way.  The first versions had some deficiencies that for me mostly fell in the category of the tight coupling between the model and its resulting object classes (entities). Version 4 of the Entity Framework pretty much solves this with the support of T4 templates that allow you to implement your entities as self-tracking entities, plain old CLR objects (POCO), et al.  Doing this involves either specifying a new code generation template or implementing one yourself.  Visual Studio 2010 ships with a self-tracking entities template, and a POCO template is available from the Extension Manager.  (Extension Manager is very nice, but it's very easy to waste a bunch of time exploring add-ins.  You've been warned.) In a current project I wanted to use POCO; however, I didn't want my entities in the same assembly as the context classes.  It would be nice if this was automatic, but since it isn't, here are the simple steps to move them.  These steps detail moving the entity classes and not the context.  The context can be moved in the same way, but I don't see a compelling reason to physically separate the context from my model.

    1. Turn off code generation for the template.  To do this, set the Custom Tool property for the entity template file to an empty string (the entity template file will be named something like MyModel.tt).
    2. Expand the tree for the entity template file and delete all of its items.  These are the items that were automatically generated when you added the template.
    3. Create a project for your entities (if you haven't already).
    4. Add an existing item and browse to your entity template file, but add it as a link (do not add it directly).  Adding it as a link will allow the model and the template to stay in sync, but the code generation will occur in the new assembly.

    Read the article

  • Using DNS in iproute2

    - by Oliver
    In my setup I can redirect the default gateway based on the source address. Let's say a user connected through tun0 (10.2.0.0/16) is redirected to another VPN. That works fine!

        ip rule add from 10.2.0.10 lookup vpn1

    In a second rule I redirect the default gateway to another gateway if the user accesses a certain IP address:

        ip rule add from 10.2.0.10 to 94.142.154.71 lookup vpn2

    If I access the page on 94.142.154.71 (myip.is) the user is correctly routed and I can see the IP of the second VPN. On any other pages the IP address of vpn1 is shown. But how do I tell iproute2 that all requests to e.g. google.com should be redirected through vpn2?
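
    The only workaround I can think of is resolving the name myself and adding one rule per address, along these lines (a rough sketch; the table name and source address are from my setup above, and since DNS answers change over time this would have to be re-run periodically):

        # Rough sketch: iproute2 matches on IP addresses, not host names,
        # so resolve the name and add one rule per resolved address.
        # "vpn2" and the source address are from the setup described above.
        import socket
        import subprocess

        SOURCE = "10.2.0.10"
        TABLE = "vpn2"

        addresses = {info[4][0] for info in
                     socket.getaddrinfo("google.com", 80)}
        for addr in addresses:
            subprocess.check_call(["ip", "rule", "add", "from", SOURCE,
                                   "to", addr, "lookup", TABLE])

    Is there a cleaner way to do this based on the host name itself?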

    Read the article

  • What is your best travel (portable) WiFi router?

    - by Sorin Sbarnea
    I'm looking for a small WiFi router that I could use to create my own wireless network when I travel. I'm also interested in a router that has a very good signal/antenna and that is easy to take with you. Please do not recommend products just because you can find them on the net; I'm interested in personal experience. Also, it's good to know if you can install open-source firmware on it. Selection criteria:
    - size
    - signal quality
    - price
    Required: support for both 110V and 220V power, best if the cord is embedded or retractable.

    Read the article

  • Investigation: Can different combinations of components effect Dataflow performance?

    - by jamiet
    Introduction

    The Dataflow task is one of the core components (if not the core component) of SQL Server Integration Services (SSIS) and often the most misunderstood. This is not surprising; it’s an incredibly complicated beast and we’re abstracted away from that complexity via some boxes that go yellow, red or green and that have some lines drawn between them. (Screenshot: example dataflow.) In this blog post I intend to look under that facade and get into some of the nuts and bolts of the Dataflow Task by investigating how the decisions we make when building our packages can affect performance. I will do this by comparing the performance of three dataflows that all have the same input, all produce the same output, but which all operate slightly differently by way of having different transformation components. I also want to use this blog post to challenge a commonly held opinion that I see perpetuated over and over again on the SSIS forum. That is, people assume adding components to a dataflow will be detrimental to overall performance. It’s not surprising that people think this (it is intuitive to think that more components means more work), however this is not a view that I share. I have always been of the opinion that there are many factors affecting dataflow duration and the number of components is actually one of the less important ones; having said that, I have never proven that assertion, and that is one reason for this investigation. I have actually seen evidence that some people think dataflow duration is simply a function of number of rows and number of components. I’ll happily call that one out as a myth even without any investigation!

    The Setup

    I have a 2GB datafile which is a list of 4731904 (~4.7 million) customer records with various attributes against them, and it contains 2 columns that I am going to use for categorisation: [YearlyIncome] and [BirthDate]. The data file is a SSIS raw format file, which I chose to use because it is the quickest way of getting data into a dataflow and, given that I am testing the transformations, not the source or destination adapters, I want to minimise external influences as much as possible. In the test I will split the customers according to month of birth (12 of those) and whether or not their yearly income is above or below 50000 (2 of those); in other words I will be splitting them into 24 discrete categories, and in order to do it I shall be using different combinations of SSIS’ Conditional Split and Derived Column transformation components. The 24 datapaths that occur will each input to a rowcount component, again because this is the least resource-intensive means of terminating a datapath. The test is being carried out on a Dell XPS Studio laptop with a quad core (8 logical procs) Intel Core i7 at 1.73GHz and a Samsung SSD hard drive. It’s running SQL Server 2008 R2 on Windows 7.

    The Variables

    Here are the three combinations of components that I am going to test:

    1. One Conditional Split - A single Conditional Split component, CSPL Split by Month of Birth and Income Category, that will use expressions on [YearlyIncome] & [BirthDate] to send each row to one of 24 outputs. (Screenshot: the expression logic in use.)
    2. Derived Column & Conditional Split - A Derived Column component, DER Income Category, that adds a new column [IncomeCategory] which will contain one of two possible text values {“LessThan50000”,”GreaterThan50000”} and uses [YearlyIncome] to determine which value each row should get.
    A Conditional Split component, CSPL Split by Month of Birth and Income Category, then uses that new column in conjunction with [BirthDate] to determine which of the same 24 outputs to send each row to. Put more simply, I am separating the Conditional Split of #1 into a Derived Column and a Conditional Split. (Screenshots: DER Income Category; CSPL Split by Month of Birth and Income Category.)
    3. Three Conditional Splits - A Conditional Split component that produces two outputs based on [YearlyIncome], one for each Income Category. Each of those outputs will go to a further Conditional Split that splits the input into 12 outputs, one for each month of birth (identical logic in each). In this case then, I am separating the single Conditional Split of #1 into three Conditional Split components. (Screenshots: CSPL Split by Income Category; CSPL Split by Month of Birth 1 & 2.)

    Each of these combinations will provide an input to one of the 24 rowcount components, just the same as before. For illustration, here is a screenshot of the dataflow containing three Conditional Split components. As you can see, these dataflows have a fair bit of work to do, and remember that they’re doing that work for 4.7 million rows. I will execute each dataflow 10 times and use the average for comparison. I foresee three possible outcomes:

    1. The dataflow containing just one Conditional Split (i.e. #1) will be quicker
    2. There is no significant difference between any of them
    3. One of the two dataflows containing multiple transformation components will be quicker

    Regardless of which of those outcomes comes to pass, we will have learnt something, and that makes this an interesting test to carry out. Note that I will be executing the dataflows using dtexec.exe rather than hitting F5 within BIDS.

    The Results and Analysis

    The table below shows all of the executions, 10 for each dataflow. It also shows the average for each along with a standard deviation. All durations are in seconds. I’m pasting a screenshot because I frankly can’t be bothered with the faffing about needed to make a presentable HTML table. It is plain to see from the average that the dataflow containing three conditional splits is significantly faster, the other two taking 43% and 52% longer respectively. This seems strange though, right? Why does the dataflow containing the most components outperform the other two by such a big margin? The answer is actually quite logical when you put some thought into it, and I’ll explain that below. Before progressing, a side note: the standard deviation for the “Three Conditional Splits” dataflow is orders of magnitude smaller – indicating that performance for this dataflow can be predicted with much greater confidence too.

    The Explanation

    I refer you to the screenshot above that shows how CSPL Split by Month of Birth and Income Category in the first dataflow is set up. Observe that there is a case for each combination of Month of Birth and Income Category – 24 in total. These expressions get evaluated in the order that they appear, and hence, if we assume that Month of Birth and Income Category are uniformly distributed in the dataset, we can deduce that the expected number of expression evaluations for each row is (1 (the minimum) + 24 (the maximum)) / 2 = 12.5. Now take a look at the screenshots for the second dataflow.
    We are doing one expression evaluation in DER Income Category and we have the same 24 cases in CSPL Split by Month of Birth and Income Category as we had before, only the expression differs slightly. In this case then we have 1 + 12.5 = 13.5 expected evaluations for each row – that would account for the slightly longer average execution time for this dataflow. Now onto the third dataflow, the quick one. CSPL Split by Income Category does a maximum of 2 expression evaluations, thus the expected number of evaluations per row is 1.5. CSPL Split by Month of Birth 1 & CSPL Split by Month of Birth 2 both have less work to do than the previous Conditional Split components because they only have 12 cases to test for, thus the expected number of expression evaluations in each is 6.5. There are two of them, but each row only flows through one of them, so the expected number of expression evaluations per row is 1.5 + 6.5 = 8. Fewer evaluations already, but there is a second and bigger reason the third dataflow wins: the conditional expressions in the first two dataflows have two boolean predicates to evaluate – one for Income Category and one for Month of Birth; the expressions in the Conditional Splits in the third dataflow only have one predicate each, thus they are doing a lot less work. To sum up, the difference in execution times can be attributed to the difference between:

        MONTH(BirthDate) == 1 && YearlyIncome <= 50000

    and

        MONTH(BirthDate) == 1

    In the first two dataflows, YearlyIncome <= 50000 gets evaluated an average of 12.5 times for every row, whereas in the third dataflow it is evaluated once and once only. Multiply those 11.5 extra operations by 4.7 million rows and you get a significant amount of extra CPU cycles – that’s where our duration difference comes from.

    The Wrap-up

    The obvious point here is that adding new components to a dataflow isn’t necessarily going to make it go any slower; moreover, you may be able to achieve significant improvements by splitting logic over multiple components rather than one. Performance tuning is all about reducing the amount of work that needs to be done, and that doesn’t necessarily mean use fewer components; indeed, sometimes you may be able to reduce workload in ways that aren’t immediately obvious, as I think I have proven here. Of course there are many variables in play here and your mileage will most definitely vary. I encourage you to download the package and see if you get similar results – let me know in the comments. The package contains all three dataflows plus a fourth dataflow that will create the 2GB raw file for you (you will also need the [AdventureWorksDW2008] sample database from which to source the data); simply disable all dataflows except the one you want to test before executing the package and remember, execute using dtexec, not within BIDS. If you want to explore dataflow performance tuning in more detail then here are some links you might want to check out:
    - Inequality joins, Asynchronous transformations and Lookups
    - Destination Adapter Comparison
    - Don’t turn the dataflow into a cursor
    - SSIS Dataflow – Designing for performance (webinar)
    Any comments? Let me know! @Jamiet
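
    PS: if you want to play with the expected-evaluations arithmetic above, here is a quick back-of-the-envelope model (a sketch only; it assumes uniformly distributed birth months and a 50/50 income split, as the analysis above does):

        # Back-of-the-envelope model of the three dataflow designs above.
        # For an n-case Conditional Split over uniformly distributed rows,
        # the expected number of cases evaluated per row is (1 + n) / 2.
        def expected_evals(cases):
            return (1 + cases) / 2.0

        # The 24-way splits test two predicates per case; the
        # single-purpose splits test one predicate per case.
        one_split = expected_evals(24) * 2
        derived_plus_split = 1 + expected_evals(24) * 2
        # income split, then exactly ONE of the two month splits per row
        three_splits = expected_evals(2) + expected_evals(12)

        for name, cost in [("one conditional split", one_split),
                           ("derived column + split", derived_plus_split),
                           ("three conditional splits", three_splits)]:
            print("%-26s ~%.1f predicate evaluations per row" % (name, cost))
        # Multiply by 4.7 million rows to see where the difference comes from.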

    Read the article

  • How to specify file permission when putting a file using OpenSSH sftp command

    - by Adi Roiban
    I am using various SFTP clients for uploading files to an SFTP server and I have a problem with the default permission used when putting files. When requesting to put a file, SFTP clients like WinSCP or FileZilla will send the SSH_OPEN command without requesting any explicit file permission. On the other side, it looks like the OpenSSH sftp command on Linux (Red Hat and Ubuntu) is sending the SSH_OPEN command together with the '640' mode. How can I configure the OpenSSH command to not explicitly set the file mode, or how can I configure it to send a mode other than 640? Many thanks! Update: I checked the OpenSSH sftp client source code and it looks like OpenSSH sftp will always try to preserve the file mode even if -P is not set: http://www.koders.com/c/fidD3B20680F615B33ACCB42398FAAFEE1C007DF942.aspx?s=rsa#L986 To solve this problem I used the PuTTY SFTP client.
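
    Another workaround is scripting the upload so the mode is set explicitly after the transfer. A minimal sketch with the third-party paramiko library (host, credentials and paths below are placeholders, not my real setup):

        # Minimal sketch: upload a file over SFTP, then set its permissions
        # explicitly instead of relying on the client's default mode.
        import paramiko

        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect("sftp.example.com", username="user", password="secret")

        sftp = client.open_sftp()
        sftp.put("report.txt", "/upload/report.txt")  # transfer the file
        sftp.chmod("/upload/report.txt", 0o664)       # then set the mode we want
        sftp.close()
        client.close()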

    Read the article

  • Static NAT in AWS's Virtual Private Cloud (VPC)

    - by user1050797
    Currently, in a VPC with a public and a private subnet, all internet-bound traffic from the private subnet can be routed via a NAT instance. The NAT instance will port-address-translate the packet's source IP to use the NAT instance's elastic IP, so the public server can reply to this public address. This is a PAT mechanism. My question: is there a way for me to do static NAT on my NAT instance, i.e. use the same NAT instance to statically NAT an unassociated but reserved elastic IP to a private subnet host? This NAT instance would behave like a physical firewall doing static NAT'ing for a bunch of private IPs.

    Read the article

  • How to enable mysqli extension on redhat?

    - by nuthan
    My php.ini says:

        Additional .ini files parsed: /etc/php53/php.d/curl.ini, /etc/php53/php.d/fileinfo.ini, /etc/php53/php.d/json.ini, /etc/php53/php.d/mysqli.ini, /etc/php53/php.d/mysql.ini, /etc/php53/php.d/pdo_mysql.ini, /etc/php53/php.d/phar.ini, /etc/php53/php.d/zip.ini

    so mysqli.ini is loaded. But I still get this:

        PHP Fatal error: Class 'mysqli' not found

    I tried enabling dynamic loading and initializing PHP scripts with dl("mysqli.so"); I also tried recompiling the PHP source:

        ./configure --with-mysql=/usr/lib64/mysql --with-mysqli=/usr/lib64/mysql/mysql_config

    Even this didn't work. Can anybody help me solve my problem? Red Hat Enterprise Linux Server release 5 (Tikanga), x86_64 GNU/Linux. No access to RHN. Thanks.

    Read the article

  • Using tshark to generate traffic logs every X seconds

    - by Sridhar Iyer
    I'm trying to use tshark to maintain a running history of all the packets that are going through an interface, for say the last 30 seconds. I want it to be human readable. This is a Linux machine, and without mucking too much in the netstack source (which I can do if push comes to shove), I was wondering if I can use tshark to do this. tshark has -b duration:10 -b files:2 options which I can use to generate a rotating set of 2 files, but I don't know what format it writes the files in or how to read them.
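
    For what it's worth, the direction I've been experimenting with is below (a rough sketch; it assumes the ring-buffer capture writes ordinary pcap files that tshark itself can render as text with -r, and the interface name and paths are placeholders):

        # Rough sketch: capture with tshark's ring buffer (two rotating
        # files of ~30 seconds each), then render the most recent file as
        # tshark's usual one-line-per-packet text. Paths are placeholders.
        import glob
        import os
        import subprocess

        # Start the rotating capture (runs until killed).
        capture = subprocess.Popen([
            "tshark", "-i", "eth0",
            "-b", "duration:30", "-b", "files:2",
            "-w", "/tmp/ring.pcap",
        ])

        # Later: find the newest ring file and print it as readable text.
        def dump_latest():
            files = glob.glob("/tmp/ring*")
            if files:
                newest = max(files, key=os.path.getmtime)
                subprocess.call(["tshark", "-r", newest])

    Is that the intended way to read the rotated files, or is there a more direct option?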

    Read the article

  • Silverlight Cream for April 06, 2010 -- #832

    - by Dave Campbell
    In this Issue: Alex van Beek, Gill Cleeren, SilverlightShow, Michael Sync, Rénald Nollet, Charles Petzold, The-Oliver, and Max Paulousky.

    Shoutouts: Denislav Savkov of SilverlightShow ported his Slider control to WP7: Windows Phone 7 Series Sample Image Viewer. SilverlightShow interview: The Silverlight Tour - what, where and why. Interview with one of the Tour organizers, Laurent Duveau.

    From SilverlightCream.com:

    - Silverlight 4: using the VisualStateManager for state animations with MVVM - Alex van Beek has an approach to resolving the MVVM issue of animations without keeping a reference to the ViewModel, by way of the VisualStateManager.
    - Leveraging the ASP.NET Membership in Silverlight - Gill Cleeren's post at SilverlightShow talks about using ASP.NET authentication inside your Silverlight, making membership not only something you know and understand, but the transition from your ASP.NET apps to Silverlight simple as well.
    - Windows Phone 7 Series RSS reader - SilverlightShow has a demo RSS Reader for WP7 up... no text, but the code is there.
    - Step by Step Tutorial: Installing Multi-Touch Simulator for Silverlight Phone 7 - Michael Sync actually has a multi-touch simulator working for WP7... it involves a bunch of moving parts and one of the requirements is Windows 7, but if that works for you, this will too :)
    - Element Property Binding Improvements in Blend 4 Beta and Visual Studio 2010 RC - Rénald Nollet demonstrates new Blend and VS2010 features that assist you in Element Property binding, with real examples.
    - Projection Transforms Sans Math - Charles Petzold is writing about Silverlight and 3D, specifically in this post 3D without math, which becomes PlaneProjection... a good long tutorial on it and code to back it all up.
    - Daily Demo: Silverlight Install out of browser & Check for Update Behaviors - The-Oliver has a post up about OOB and checking for updates using behaviors with only a slight change to your xaml... cool!
    - Wizards. Prototype of sketching Wizard for WPF – 2 - Max Paulousky has part 2 of his tutorial on a sketchflow Wizard for WPF... yes, WPF, but check it out... source too.

    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight Silverlight 3 Silverlight 4 Windows Phone MIX10

    Read the article

< Previous Page | 598 599 600 601 602 603 604 605 606 607 608 609  | Next Page >