Search Results

Search found 45324 results on 1813 pages for 'open source'.


  • how to use gps receiver bu-353 in ubuntu 10.10

    - by Parimal
    Hi, I have a BU-353 GPS receiver with a USB interface and I want to know how I can use it under Ubuntu. I ran the following command:

        gpsd -n -N -D 2 /dev/ttyUSB0

    and got this output:

        gpsd: launching (Version 2.94)
        gpsd: listening on port gpsd
        gpsd: running with effective group ID 1000
        gpsd: running with effective user ID 1000
        gpsd: opening GPS data source type 3 at '/dev/ttyUSB0'
        gpsd: speed 38400, 8N1
        gpsd: Garmin: garmin_gps Linux USB module not active.
        gpsd: speed 9600, 8O1
        gpsd: speed 38400, 8N1
        gpsd: gpsd_activate(): opened GPS (fd 6)
        gpsd: speed 4800, 8N1
        gpsd: NTPD ntpd_link_activate: 0
        gpsd: /dev/ttyUSB0 identified as type SiRF binary (2.687608 sec @ 4800bps)
        gpsd: detaching 127.0.0.1 (sub 1, fd 8) in detach_client
        gpsd: detaching 127.0.0.1 (sub 1, fd 8) in detach_client

    After this I started tangoGPS, which said no GPS and no gpsd found.
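    One way to rule out the receiver itself is to read the serial port directly, with gpsd stopped (it holds the device open while running). A minimal sketch, assuming pyserial is installed and using the same device and the 4800 bps rate gpsd detected above:

        # Quick check that the BU-353 is actually emitting data on the serial port,
        # independent of gpsd. Stop gpsd first, since it keeps the device open.
        import serial  # pyserial

        port = serial.Serial("/dev/ttyUSB0", 4800, timeout=2)
        for _ in range(10):
            line = port.readline()
            # SiRF binary shows up as raw bytes; NMEA as $GP... sentences
            print(repr(line))
        port.close()

    If nothing at all comes back here, the problem is below gpsd (device, cable, or driver) rather than in the gpsd/tangoGPS link.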

    Read the article

  • The future Linux kernel will support multicore processors, among other innovations

    Update of 07.04.2010 by Katleen. The future Linux kernel will support multicore processors, among other innovations. Things are busy in the open source world: several contributors are working on the upcoming Linux kernel 2.6.34, which could bring big steps forward in certain areas. First, for multicore processors: Arnd Bergmann is in the process of removing the kernel's giant lock (the Big Kernel Lock). At the moment it is not possible to perform certain operations simultaneously, which considerably reduces the performance of machines with multicore processors. His patch, which removes these machine-blocking tasks, is reportedly ready to be integrated into the Linux kernel. ...

    Read the article

  • AMD/AMD GPU Switching

    - by user73816
    I have two GPUs in my laptop, both of which are AMD. Whenever I use the Catalyst Control Centre to change the GPU, nothing changes after reboot. In fact, when I run the fglrxinfo command, the terminal only reports seeing one GPU, the integrated one (HD 4250). The dedicated one (Mobility HD 5470) goes unnoticed and I can't seem to use that GPU at all. I really don't want to use the open source drivers because I've found they're generally slower than the proprietary ones, but the proprietary driver doesn't seem to work either. Any help is appreciated.

    Read the article

  • Tuning GlassFish for Production

    - by arungupta
    The GlassFish distribution is optimized for developers and needs simple deployment and server configuration changes to provide the performance typically required for production usage. The formal Performance Tuning Guide explains capacity planning and offers tuning tips for the application, GlassFish, the JVM, and the operating system. The GlassFish Server Control (only available with the commercial edition) also comes with a Performance Tuner that optimizes the runtime for optimal throughput and scalability. And then there are multiple blogs that provide more insights as well:
    • Optimizing GlassFish for Production (Diego Silva, Mar 2012)
    • GlassFish Production Tuning (Vegard Skjefstad, Nov 2011)
    • GlassFish in Production (Sunny Saxena, Jul 2011)
    • Putting GlassFish v3 in Production: Essential Surviving Guide (JeanFrancois, Nov 2009)
    • A GlassFish Tuning Primer (Scott Oaks, Dec 2007)
    What is your favorite source for GlassFish performance tuning?

    Read the article

  • Ban HTML comments from your pages and views

    Too many people don't realize that there are other options than <!-- --> comments to annotate HTML. These comments are harmful because they are sent to the client and thus make your page heavier than it needs to be. When doing ASP.NET, a simple drop-in replacement is server comments, which are delimited by <%-- --%> instead of <!-- -->. Those server comments are visible in your source code, but will never be rendered to the client. Here's a simple way to sanitize a web site.

    Read the article

  • How a batch file runs on a remote machine started by PSEXEC

    - by user38780
    I am having an issue running a batch file on a remote machine using PSEXEC. The file runs, but not the way it does when run through Remote Desktop. The batch file launches a 32-bit application, which opens multiple 16-bit applications; these should all run under one ntvdm.exe (in one memory space). Through Remote Desktop the batch file runs under the explorer process and works correctly, opening only one ntvdm.exe. Using PSEXEC the batch runs, but not under the explorer process, and a separate ntvdm.exe is opened for each process. I found that running the batch via explorer from PSEXEC works, but comes up with a "File Download - Security Warning", e.g.:

        psexec.exe \\compname -u username -p password -s -d -i 0 explorer C:\Program.bat

    I want to be able to run the batch successfully without receiving warnings; it is a local warning, not a network share warning. (It is possible to recreate the warning by typing "explorer C:\windows\system32\cmd.exe" into Run.) I would like to know if anyone knows of a way to get PSEXEC to run the batch file as though it was started by explorer, or a way of removing the local "File Download - Security Warning". Thanks

    Read the article

  • 'Development dashboard' web application

    - by espais
    Hi all, I am not sure if something like this exists in that it is ready out of the box. I currently have some web space that I use for various projects, and I would like to setup an area for some friends and I to develop web applications together. My ideal setup would be to create a folder, say, webdev.domain.com. We could all go to this domain, login, and then be able to setup new applications, pick which language will be used, setup database tables, allow HTML based file uploading, and create sub-folders to basically have a test bed for the applications. In retrospect, it seems like I'm describing a limited version of cpanel. I could come up with something in Drupal I'm sure, but I don't want to have to really spend time configuring much. Like I said, I want to install it and have minimal configuration. Does something like this exist (preferably in open-source)?

    Read the article

  • How to set a target as image [on hold]

    - by Zadalaxmi
    How do I set an image as a drag-and-drop target in the following code?

        public void addListenerForImage(final Image roomImage) {
            final DragAndDrop dragAndDrop = new DragAndDrop();
            dragAndDrop.addSource(new DragAndDrop.Source(roomImage) {
                public DragAndDrop.Payload dragStart(InputEvent event, float x, float y, int pointer) {
                    DragAndDrop.Payload payload = new DragAndDrop.Payload();
                    payload.setDragActor(roomImage);
                    dragAndDrop.setDragActorPosition(-x, -y + roomImage.getHeight());
                    return payload;
                }

                public void dragStop(InputEvent event, float x, float y, int pointer, Target target) {
                    roomImage.setBounds(50, 125, roomImage.getWidth(), roomImage.getHeight());
                    if (target != null) {
                        roomImage.setPosition(target.getActor().getX(), target.getActor().getY());
                    }
                    System.out.println(target);
                    stage.addActor(roomImage);
                }
            });
            // Note: no drop Target is ever registered with dragAndDrop.addTarget(...),
            // so dragStop() always receives target == null in this snippet.
        }

    My problem is that I can drag the images, but I am not able to set an image as the target; the target shows as null. Also, if some of the images in a group are invisible, how can I test whether an image overlaps them or not? Please give some links and suggestions.

    Read the article

  • Error opening hyperlinks in Excel 2003

    - by richardtallent
    When clicking to follow hyperlinks from Excel, I'm now getting this error:

        Unable to open http://blah... Cannot download the information you requested.

    The hyperlinks in the Excel file are created using the HYPERLINK() formula. I use Google Chrome as my default browser. The web site in question uses Basic Authentication, and I've entered correct credentials when prompted (the dialog looked like an IE auth box, not Chrome's, but it's always been that way, even when it was working properly). This hasn't been an issue until recently. I'm guessing our IT department made some lame change to IE's configuration that is causing Office to not be able to open the URLs, despite having Chrome as my browser. Things I've checked already:
    • URLs are good; they work fine when pasted manually into Chrome, IE, or Firefox.
    • IE is not set to Work Offline (already found that suggestion on Google).
    • I checked Program Access and Defaults and verified that Chrome is selected.
    • Nothing in the URL requires URL encoding, so it's no goofy issue with encoding.
    I've had reports from some other users now and then about the same problem, but this is the first time I've experienced it myself.

    Read the article

  • Managing multiple dedicated servers centrally using a web GUI tool?

    - by Sampath
    Application architecture: I have a single Ruby on Rails application running as multiple instances (i.e. each client has an identical subdomain) on multiple dedicated servers using Phusion Passenger + nginx. The subdomain setup is done using the vhost option in the nginx Passenger module. For example:

        server 1 serves clients 1-100, with subdomains www.client1.product.com up to www.client100.product.com
        server 2 serves clients 101-200, with subdomains www.client101.product.com up to www.client200.product.com
        server 3 serves clients 201-300, with subdomains www.client201.product.com up to www.client300.product.com

    My question is: I need to centrally manage all my N dedicated servers, and I am looking for a web GUI tool to handle tasks like:
    1) back up all MySQL databases automatically from all dedicated servers and send them to some FTP backup drive
    2) back up files and folders from all dedicated servers and send them to some FTP backup drive
    3) manage the firewall (CSF, http://configserver.com/cp/csf.html) centrally for all dedicated servers
    4) view server load and bandwidth used graphically for all N dedicated servers
    Note: I would prefer an open source solution.
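    I don't know of a single web GUI that covers all four tasks, but the backup half is easy to script while you look. A minimal sketch of task 1, assuming passwordless SSH to each server; all hostnames, credentials, and paths below are placeholders, not real infrastructure:

        # backup_dbs.py -- dump MySQL on each server and push the dumps to an FTP drive.
        import ftplib
        import subprocess
        from datetime import date

        SERVERS = ["server1.product.com", "server2.product.com", "server3.product.com"]
        FTP_HOST, FTP_USER, FTP_PASS = "backup.example.com", "backupuser", "secret"

        for host in SERVERS:
            dump_file = "%s-%s.sql.gz" % (host, date.today().isoformat())
            # Run mysqldump remotely over SSH, compressing on the remote side.
            with open(dump_file, "wb") as out:
                subprocess.check_call(
                    ["ssh", host, "mysqldump --all-databases | gzip"],
                    stdout=out)
            # Upload the dump to the FTP backup drive.
            ftp = ftplib.FTP(FTP_HOST, FTP_USER, FTP_PASS)
            with open(dump_file, "rb") as f:
                ftp.storbinary("STOR " + dump_file, f)
            ftp.quit()

    Run from cron, this covers the database backups centrally; task 2 is the same pattern with tar instead of mysqldump.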

    Read the article

  • Accessing network shares on Windows7 via SonicWall VPN client

    - by Jack Lloyd
    I'm running Windows 7 x64 (fully patched) and the SonicWall 4.2.6.0305 client (64-bit, claims to support Windows 7). I can log in to the VPN and access network resources (e.g. SSH to a machine that lives behind the VPN). However, I cannot seem to access shared filesystems; Windows is refusing to do discovery on the VPN network. I suspect part of the problem is that Windows persistently considers the VPN connection to be a 'public network'. Normally you can open the Network and Sharing Center and modify this setting, but it does not give me a choice for the VPN. So I did the expedient thing and turned on file sharing for public networks. I also disabled the Windows firewall for good measure. Still no luck.

    I can access the server directly by putting \\192.168.1.240 in the taskbar, which brings up the list of shares on the server. However, trying to open any of the shares simply tells me "Windows cannot access \\192.168.1.240\share You do not have permission to access ..."; it never asks for a domain password.

    I also tried Windows 7's native VPN functionality - it couldn't successfully connect to the VPN at all. I suspect this is because SonicWall is using some obnoxious special/undocumented authentication system; I had similar problems trying to connect on Linux with the normal IPsec tools there.

    What magical invocation or control panel option am I missing that will let this work? Are there any reasonable debugging strategies? I'm feeling quite frustrated at Windows' tendency to not give me much useful information that might let me understand what it is trying to do and what is going wrong.

    Read the article

  • TechDays 2010 Portugal - The Day After

    - by Ricardo Peres
    Well, TechDays 2010 Portugal is over, time for a balance. I really enjoyed being a speaker; although my presentation took a lot more time than it should have, it was gratifying to see so many people staying until the end. Lots of subjects were left behind, though. My presentation is available at my SkyDrive, here. Soon I will place the source code there, too. I would like to know if you were there, and, if so, what you thought of my presentation! Feel free to send your thoughts, whatever they are. On the other hand, I saw some really interesting presentations, to name a few, from Nuno Antunes, Nuno Godinho, Filipe Prezado, Nuno Silva and my friend André Lage. I also had the chance to finally meet Caio Proiete and Pedro Perfeito. Perhaps we'll meet again at TechDays Remix, who knows.

    Read the article

  • Favorite Visual Studio 2010 Extensions, Update

    - by Scott Dorman
    With the release of the Visual Studio Pro Power Tools (and many other new extensions having been released), my list of favorite Visual Studio extensions has changed. All of these extensions are available in the Visual Studio Gallery. Here is the list of extensions that I currently have installed and find useful:
    • Bing Start Page
    • CodeCompare
    • Collapse Selection In Solution Explorer
    • Collapse Solution
    • Color Picker Completion
    • Extension Analyzer
    • Find Results Highlighter
    • Find Results Tweak (Available from CodePlex)
    • Format Document
    • HelpViewerKeywordIndex
    • HighlightMultiWord
    • Image Insertion
    • Indentation Matcher Extension
    • ItalicComments
    • MoveToRegionVSX
    • Numbered Bookmarks
    • PowerCommands for Visual Studio 2010
    • Regular Expressions Margin
    • Search Work Items for TFS 2010
    • Source Outliner
    • Spell Checker
    • Structure Adornment (this also installs the BlockTagger, BlockTaggerImpl, SettingsStore, and SettingsStoreImpl extensions)
    • StyleCop
    • Team Foundation Server Power Tools
    • TFS Auto Shelve
    • Visual Studio Color Theme Editor
    • Visual Studio Pro Power Tools
    • VS10x Code Map
    • VS10x Code Marker
    • VS10x Collapse All Projects
    • VS10x Editor View Enhancer
    • VS10x Insert Debug Names
    • VS10x Selection Popup
    • VS10x Super Copy Paste
    • VSCommands 2010
    • Word Wrap with Auto-Indent

    Read the article

  • EEE PC Keyboard malfunctioning - Ctrl key "sticks" after 10 seconds

    - by DWilliams
    I was given an EEE PC belonging to a friend of a friend to fix. The keyboard did not appear to work at all. I spent a while testing various things, blowing the keyboard out, checking for damage, and so on. Nothing appeared to be physically wrong. At first I noticed that the keyboard appeared to work just fine for 10 seconds (on average, sometimes more, sometimes less) after being powered on. It had been restored to the factory default Xandros installation with no user set up, so I couldn't get in to mess with things since I couldn't type to make a user. I made an Ubuntu live USB to boot it from, and managed to get the boot order changed to boot from USB in the ~10 seconds of working keyboard I had (I don't think I've ever had to rush around BIOS menus that quickly).

    After I got Ubuntu up on it, I played around a bit more and determined that apparently the Ctrl key is stuck down (not literally, but it's on all the time). If I open gedit, pressing the "o" key brings up the open dialog, "s" opens the save dialog, and all other behaviour you would expect to see if you were holding down the Ctrl key. The only exception I noticed is the "9" and "0" keys; they function normally. Figuring that out, I made a Xandros user with a name/password consisting of 9's and 0's. I couldn't find any options in Xandros that could potentially be helpful.

    I'm not familiar with EEE PCs. Is it safe to assume that the keyboard is simply dead, or could there be another problem? I don't want to purchase another keyboard for him if that isn't going to fix the problem. The netbook doesn't show any obvious signs of damage, but the owner is a biker and very often has it with him on the road, so it's been subjected to a good bit of vibration.

    Read the article

  • Do Not Optimize Without Measuring

    - by Alois Kraus
    Recently I had to do some performance work which included reading a lot of code. It is fascinating what ideas people come up with to solve a problem. Especially when there is no problem. When you look at other people's code you will not be able to tell whether it performs well just by reading it. You need to execute it with some sort of tracing, or even better under a profiler. The first rule of the performance club is not to think and then optimize, but to measure, think, and then optimize. The second rule is to do this in a loop, to prevent bad things from slipping into your code base for too long.

    If you skip the measure step for some reason and optimize directly, it is like changing the wave function in quantum mechanics. This has no observable effect in our world, since it represents only a probability distribution of all possible values. In quantum mechanics you need to let the wave function collapse to a single value. A collapsed wave function has therefore not many but one distinct value. This is what we physicists call a measurement. If you optimize your application without measuring it, you are just changing the probability distribution of your potential performance values. Which performance your application actually has is still unknown. You only know that it will be within a specific range with a certain probability. As usual there are unlikely values within your distribution, like a startup time of 20 minutes which should only happen once in 100,000 years. 100,000 years are a very short time when the first customer tries to run your heavily distributed networking application over a slow WIFI network…

    What is the point of this? Every programmer/architect has a mental performance model in his head. A model always has a set of explicit preconditions and a lot more implicit assumptions baked into it. When the model is good it will help you to think of good designs, but it can also be the source of problems. In real-world systems, not all assumptions of your performance model (implicit or explicit) hold true any longer. The only way to connect your performance model and the real world is to measure it. In the WIFI example, the model assumed a low-latency, high-bandwidth LAN connection. When this assumption became wrong, the system had a drastic change in startup time.

    Let's look at an example. Let's assume we want to cache some expensive UI resource like font objects. For this undertaking we create a Cache class with the UI themes we want to support. Since fonts are expensive objects, we create them on demand, the first time the theme is requested. A simple example of a theme cache might look like this:

        using System;
        using System.Collections.Generic;
        using System.Drawing;

        struct Theme
        {
            public Color Color;
            public Font Font;
        }

        static class ThemeCache
        {
            static Dictionary<string, Theme> _Cache = new Dictionary<string, Theme>
            {
                { "Default", new Theme { Color = Color.AliceBlue } },
                { "Theme12", new Theme { Color = Color.Aqua } },
            };

            public static Theme Get(string theme)
            {
                Theme cached = _Cache[theme];
                if (cached.Font == null)
                {
                    Console.WriteLine("Creating new font");
                    cached.Font = new Font("Arial", 8);
                }
                return cached;
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                Theme item = ThemeCache.Get("Theme12");
                item = ThemeCache.Get("Theme12");
            }
        }

    This cache creates font objects only once, since on the first retrieval of the Theme object the font is added to it. When we let the application run, it should print "Creating new font" only once. Right? Wrong!

    The vigilant readers have spotted the issue already. The creator of this cache class wanted to get maximum performance, so he decided that the Theme object should be a value type (struct) to not put too much pressure on the garbage collector. The code

        Theme cached = _Cache[theme];
        if (cached.Font == null)
        {
            Console.WriteLine("Creating new font");
            cached.Font = new Font("Arial", 8);
        }

    works with a copy of the value stored in the dictionary. This means we mutate a copy of the Theme object and return it to our caller, but the original Theme object in the dictionary will always have null in the Font field! The solution is to change the declaration of struct Theme to class Theme, or to update the theme object in the dictionary. Our cache, as it currently stands, is actually a non-caching cache.

    The funny thing is that I found this out with a profiler, by looking at which objects were finalized. I found way too many font objects being finalized. After a bit of debugging I found that the allocation source for the Font objects was this cache. Since this cache had been there for years, it means that:
    • the cache was never needed, since I found no perf issue due to the creation of font objects
    • the cache was never profiled to check whether it did bring any performance gain
    • to make the cache beneficial, it needs to be accessed much more often
    That was the story of the non-caching cache. Next time I will write something about measuring.

    Read the article

  • MySQL my.cnf as a symbolic link in Ubuntu 12.04

    - by Juan Cruz
    I am not able to use a symlink for the my.cnf file (Ubuntu 12.04 server). I added the alias to the /etc/apparmor.d/tunables/alias file (as I did for 10.04, where it worked) but I get:

        May 30 16:00:01 ip-10-242-209-203 kernel: [176926.213403] type=1400 audit(1338393601.350:244): apparmor="DENIED" operation="open" parent=1 profile="/usr/sbin/mysqld" name="/opt/data/my.cnf" pid=18128 comm="mysqld" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
        May 30 16:00:01 ip-10-242-209-203 kernel: [176926.222016] init: mysql main process (18128) terminated with status 1
        May 30 16:00:01 ip-10-242-209-203 kernel: [176926.222084] init: mysql respawning too fast, stopped

    As a workaround I added the following line to the /etc/apparmor.d/local/usr.sbin.mysqld file:

        /etc/mysql/my.cnf r,

    The default configuration is:

        /etc/mysql/*.cnf r,

    Is this a bug? Is it an apparmor bug or a mysql bug? It seems that this configuration has changed since MySQL 5.1 (https://bugs.launchpad.net/ubuntu/+source/mysql-5.1/+bug/619172), which worked for me before. Thanks!

    Read the article

  • The Breakpoint Ep 3: The Sourcemap Spectacular with Paul Irish and Addy Osmani

    Ask and vote for questions at goo.gl Take CoffeeScript to JavaScript to minified and all the way back with source maps. In addition to a new CoffeeScript sourcemap workflow, we'll cover the latest sourcemap updates so you can understand how to dramatically improve your debugging experience. Finally, Paul and Addy will be joined by a special guest, Yeoman core contributor Sindre Sorhus, to discuss what big new changes are coming to the project. (From: GoogleDevelopers. Time: 01:00:00)

    Read the article

  • What are the legal risks if any of using a GPL or AGPL Web Application Framework/CMS?

    - by Seth Spearman
    Tried to ask this on SO but was referred here... Am I correct in saying that using a GPL'ed web application framework such as Composite C1 would NOT obligate a company to share the source code we write against said framework? Closing that loophole is the purpose of the AGPL, am I correct? Does this also apply to JavaScript frameworks like KendoUI? The GPL would require any changes that we make to the framework to be made available to others only if we were to offer it for download; in other words, merely loading a web site's content into my browser is not "conveying" or "distributing" that software. I have been arguing that we should avoid GPL web frameworks, and now, after researching, I am pretty sure I am wrong, but I wanted to get other opinions. Seth

    Read the article

  • Sources (other than tutorials) on Game Mechanics

    - by Holland
    But I'm not quite sure where I should start from here. I know I have to go and grab an engine with some prebuilt libraries, and then from there learn how to actually code a game, etc. All I have open right now is a "program Tetris" tutorial for C++, but I'm not even sure that will really help me with what I want to accomplish. I'm curious whether there is any good C++ documentation related to game development which provides information on building a game in more of a component model (by this I'm referring to the documentation, not the actual object-oriented design of the game itself), rather than an entire tutorial designed to do something specific. This could include information based on various design methodologies, or how to link hardware with OpenGL interfaces, or just simply learning how to render 2D images on a canvas. I suppose this place is definitely a good source :P, but what I'm looking for is quite a bit of information - and I think posting a new question every ten minutes would just flood the site...

    Read the article

  • SQL SERVER – SSIS Parameters in Parent-Child ETL Architectures – Notes from the Field #040

    - by Pinal Dave
    [Notes from Pinal]: SSIS is a very well explored subject; however, there are so many interesting elements that when we read, we learn something new. One such concept is the parent-child ETL architecture relationship in SSIS. Linchpin People are database coaches and wellness experts for a data driven world. In this 40th episode of the Notes from the Field series, database expert Tim Mitchell (partner at Linchpin People) shares a very interesting conversation about how to understand SSIS parameters in parent-child ETL architectures.

    In this brief Notes from the Field post, I will review the use of SSIS parameters in parent-child ETL architectures. A very common design pattern used in SQL Server Integration Services is one I call the parent-child pattern. Simply put, this is a pattern in which packages are executed by other packages. An ETL infrastructure built using small, single-purpose packages is very often easier to develop, debug, and troubleshoot than large, monolithic packages. For a more in-depth look at parent-child architectures, check out my earlier blog post on this topic.

    When using the parent-child design pattern, you will frequently need to pass values from the calling (parent) package to the called (child) package. In older versions of SSIS, this process was possible but not necessarily simple. When using SSIS 2005 or 2008, or even when using SSIS 2012 or 2014 in package deployment mode, you would have to create package configurations to pass values from parent to child packages. Package configurations, while effective, were not the easiest tool to work with. Fortunately, starting with SSIS in SQL Server 2012, you can now use package parameters for this purpose.

    In the example I will use for this demonstration, I'll create two packages: one intended for use as a child package, and the other configured to execute said child package. In the parent package I'm going to build a for each loop container in SSIS, and use package parameters to pass in a value - specifically, a ClientID - for each iteration of the loop. The child package will be executed from within the for each loop, and will create one output file for each client, with the source query and filename dependent on the ClientID received from the parent package.

    Configuring the Child and Parent Packages

    When you create a new package, you'll see the Parameters tab at the package level. Clicking over to that tab allows you to add, edit, or delete package parameters. As shown above, the sample package has two parameters. Note that I've set the name, data type, and default value for each of these. Also note the column entitled Required: this allows me to specify whether the parameter value is optional (the default behavior) or required for package execution. In this example, I have one parameter that is required, and the other is not.

    Let's shift over to the parent package briefly, and demonstrate how to supply values to these parameters in the child package. Using the execute package task, you can easily map variable values in the parent package to parameters in the child package. The execute package task in the parent package, shown above, has the variable vThisClient from the parent package mapped to the pClientID parameter shown earlier in the child package. Note that there is no value mapped to the child package parameter named pOutputFolder. Since this parameter has the Required property set to False, we don't have to specify a value for it, which will cause that parameter to use the default value we supplied when designing the child package.

    The last step in the parent package is to create the for each loop container I mentioned earlier, and place the execute package task inside it. I'm using an object variable to store the distinct client ID values, and I use that as the iterator for the loop (I describe how to do this more in depth here). For each iteration of the loop, a different client ID value will be passed into the child package parameter.

    The final step is to configure the child package to actually do something meaningful with the parameter values passed into it. In this case, I've modified the OleDB source query to use the pClientID value in the WHERE clause of the query to restrict results for each iteration to a single client's data. Additionally, I'll use both the pClientID and pOutputFolder parameters to dynamically build the output filename. As shown, the pClientID is used in the WHERE clause, so we only get the current client's invoices for each iteration of the loop. For the flat file connection, I'm setting the Connection String property using an expression that engages both of the parameters for this package, as shown above.

    Parting Thoughts

    There are many uses for package parameters beyond a simple parent-child design pattern. For example, you can create standalone packages (those not intended to be used as a child package) and still use parameters. Parameter values may be supplied to a package directly at runtime by a SQL Server Agent job, through the command line (via dtexec.exe), or through T-SQL (a sketch of the T-SQL route is included after the reference below). Also, you can have project parameters as well as package parameters. Project parameters work in much the same way as package parameters, but they apply to all packages in a project, not just a single package.

    Conclusion

    Of the numerous advantages of using the catalog deployment model in SSIS 2012 and beyond, package parameters are near the top of the list. Parameters allow you to easily share values from parent to child packages, enabling more dynamic behavior and better code encapsulation.

    If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
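    To illustrate the T-SQL route mentioned under Parting Thoughts: the sketch below starts a catalog-deployed child package and supplies a value for pClientID via the standard SSISDB catalog stored procedures. It is wrapped in Python/pyodbc only to keep the example self-contained; the connection string and the folder/project/package names are placeholders, not values from the article:

        # Start a catalog-deployed package and pass a package parameter (SSIS 2012+).
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=localhost;DATABASE=SSISDB;Trusted_Connection=yes",
            autocommit=True)
        cur = conn.cursor()
        cur.execute("""
            DECLARE @execution_id BIGINT;
            EXEC catalog.create_execution
                 @folder_name = N'Demo', @project_name = N'ParentChild',
                 @package_name = N'Child.dtsx', @reference_id = NULL,
                 @use32bitruntime = 0, @execution_id = @execution_id OUTPUT;
            -- object_type 30 = package parameter (20 would be a project parameter)
            EXEC catalog.set_execution_parameter_value @execution_id,
                 @object_type = 30, @parameter_name = N'pClientID', @parameter_value = 42;
            EXEC catalog.start_execution @execution_id;
            SELECT @execution_id;""")
        print("Started execution", cur.fetchone()[0])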

    Read the article

  • dns hosting - url forwarding - hiding forwarded url?

    - by jeremycollins
    I have free DNS hosting with the domain registrar, and I'd like the DNS-hosted domain www.example.com to display the contents of www.myotherlongdomain.com. I only have 301/302/iframe forwarding options; however, I want to mask the redirected (longdomain) URL. If I use frames, users can view the source and see the (longdomain) URL the contents are coming from. How can I hide it so it always displays www.example.com? There is no cloaking/masking option with the registrar. Thanks.

    Read the article

  • Mplayer no sound when playing some movies

    - by Ivan Peevski
    Ok, that's a bit of a strange problem that somehow crept into my system. It used to work fine. Here is the problem, as far as I can identify it. When I try to play certain video files with mplayer, there is no sound. As far as I can tell, it is only an issue with AC3 and DTS sound tracks (using the ffmpeg decoder). Mplayer says:

        ==========================================================================
        Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders
        AUDIO: 48000 Hz, 6 ch, s16le, 1536.0 kbit/33.33% (ratio: 192000->576000)
        Selected audio codec: [ffdca] afm: ffmpeg (FFmpeg DTS)
        ==========================================================================
        [AO_ALSA] Playback open error: Device or resource busy
        Failed to initialize audio driver 'alsa'
        Could not open/initialize audio device -> no sound.
        Audio: no sound

    (Similar with AC3 sound, but using the ffac3 audio codec.) Trying a different audio output (-ao oss/pcm/sdl) doesn't fix the problem. The strange thing is that if I play these files directly with ffplay, they work fine, and mplayer sound with mp3/ogg is fine. My alsa configuration is standard (no /etc/asound.conf or ~/.asound*).

    OS: Linux Gentoo
    Mplayer: 1.0_rc4_p20100213 (SVN-r30554-4.3.4)
    FFMpeg: 0.5_p20601-r1 (SVN-r20601)

    Any other information I can provide?

    Read the article

  • Analysing and measuring the performance of a .NET application (survey results)

    - by Laila
    Back in December last year, I asked myself: could it be that .NET developers think you need three days and a PhD to do performance profiling on their code? What if developers are shunning profilers because they perceive them as too complex to use? If so, then what method do they use to measure and analyse the performance of their .NET applications? Do they even care about performance?

    So, a few weeks ago, I decided to get a 1-minute survey up and running in the hope that some good, hard data would clear the matter up once and for all. I posted the survey on Simple Talk and got help from a few people to promote it. The survey consisted of 3 simple questions. Amazingly, 533 developers took the time to respond - which means I had enough data to get representative results! So before I go any further, I would like to thank all of you who contributed, because I now have some pretty good answers to the troubling questions I was asking myself. To thank you properly, I thought I would share some of the results with you.

    First of all, application performance is indeed important to most of you. In fact, performance is an intrinsic part of the development cycle for a good 40% of you, which is much higher than I had anticipated, I have to admit. (I know, "Have a little faith Laila!")

    When asked what tool you use to measure and analyse application performance, I found that nearly half of the respondents use logging statements, a third use performance counters, and 70% of respondents use a profiler of some sort (a 3rd party performance profiler, the CLR profiler or the Visual Studio profiler). The importance attributed to logging statements did surprise me a little. I am still not sure why somebody would go to the trouble of manually instrumenting code in order to measure its performance, instead of just using a profiler. I personally find the process of annotating code, calculating times from log files, and relating it all back to your source terrifyingly laborious. Not to mention that you then need to remember to turn it all off later! Even when you have logging in place throughout all your code anyway, you still have a fair amount of potentially error-prone calculation to sift through the results; in addition, you'll only get method-level rather than line-level timings, and you won't get timings from any framework or library methods you don't have source for. To top it all, we all know that bottlenecks are rarely where you would expect them to be, so you could be wasting time looking for a performance problem in the wrong place. On the other hand, profilers do all the work for you: they automatically collect the CPU and wall-clock timings, and present the results from method timing all the way down to individual lines of code. Maybe I'm missing a trick. I would love to know about the types of scenarios where you actively prefer to use logging statements.

    Finally, while a third of the respondents didn't have a strong opinion about code performance profilers, those who had an opinion thought that they were mainly complex to use and time consuming. Three respondents in particular summarised this perfectly:
    • "sometimes, they are rather complex to use, adding an additional time-sink to the process of trying to resolve the existing problem"
    • "they are simple to use, but the results are hard to understand"
    • "Complex to find the more advanced things, easy to find some low hanging fruit"
    These results confirmed my suspicions: profilers are seen to be designed for more advanced users who can use them effectively and make sense of the results.

    I found yet more interesting information when I started comparing samples of "developers for whom performance is an important part of the dev cycle" with those "to whom performance is only looked at in times of crisis", and "developers to whom performance is not important, as long as the app works". See the three graphs below.

    [Graph: sample of developers to whom performance is an important part of the dev cycle]
    [Graph: sample of developers to whom performance is important only in times of crisis]
    [Graph: sample of developers to whom performance is not important, as long as the app works]

    As you can see, there is a strong correlation between the usage of a profiler and the importance attributed to performance: indeed, the more important performance is to a development team, the more likely they are to use a profiler. In addition, developers to whom performance is an important part of the dev cycle have a higher tendency to use a much wider range of methods for performance measurement and analysis. And, unsurprisingly, the less important performance is, the less varied the methods of measurement are.

    So all in all, to come back to my random questions: .NET developers do care about performance. Those who care the most use a wider range of performance measurement methods than those who care less. But overall, logging statements, performance counters and third party performance profilers are the performance measurement methods of choice for most developers. Finally, although most of you find code profilers complex to use, those of you who care the most about performance tend to use profilers more than those of you to whom performance is not so important.

    Read the article

  • Tool to identify potential reviewers for a proposed change

    - by Lorin Hochstein
    Is there a tool that takes as input a proposed patch and a git repository, and identifies the developers who are the best candidates for reviewing the patch? It would use the git history to identify the authors that have the most experience with the files / sections of code being changed. Edit: The use case is a large open source project (OpenStack Compute), where merge proposals come in, and I see a merge proposal on a chunk of code I'm not familiar with, and I want to add somebody else's name to the list of suggested reviewers so that person gets a notification to look at the merge proposal.
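    I don't know of an off-the-shelf tool, but the heuristic is simple enough to sketch. A rough cut, assuming the patch is a unified diff and run from inside the repository (the script name and the 50-commit window are arbitrary choices, not an established tool):

        # suggest_reviewers.py -- tally the authors who most recently touched the
        # files a patch changes; a heuristic sketch, not a polished tool.
        import collections
        import re
        import subprocess
        import sys

        def changed_files(patch_path):
            """Extract the b/ paths from a unified diff."""
            files = []
            with open(patch_path) as f:
                for line in f:
                    m = re.match(r"\+\+\+ b/(.+)", line)
                    if m:
                        files.append(m.group(1))
            return files

        counts = collections.Counter()
        for path in changed_files(sys.argv[1]):
            # Authors of the last 50 commits touching this file, weighted by count.
            log = subprocess.check_output(
                ["git", "log", "-50", "--format=%an", "--", path])
            counts.update(log.decode().splitlines())

        for author, n in counts.most_common(5):
            print("%4d  %s" % (n, author))

    Usage: python suggest_reviewers.py proposal.diff. Weighting by recency or by lines touched (git blame) would likely give better suggestions, at the cost of a slower run.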

    Read the article

  • Default browser hangs

    - by Craig Hinrichs
    Intermittent hangs occur when I use Internet Explorer to open a new main page or new tab to a site I know is up. The browser opens, says "Waiting for site example.com", and does nothing more. If I close the window and reopen it, it connects immediately; over time I have to keep closing and reopening the window to get to the page. This happens with any page, including Google. I got sick of it and started using Chrome. I recently upgraded my antivirus and am now experiencing the same issue with Chrome. I use AVG for my antivirus. Empirically it seems that if I don't make Chrome my default browser, I don't experience the issue; I tested this theory for over two hours yesterday. Possible causes I have found, but not confirmed yet:
    1. MTU settings are not correct.
    2. I am infected but my antivirus has not caught it (unlikely but possible).
    3. ??
    I would like to think this is related to my antivirus, but I am unsure how to verify that, and I don't like the idea of killing my antivirus if #2 is a possibility. I am looking for tips on how I can troubleshoot the possible issues.

    Read the article
