Search Results

Search found 40998 results on 1640 pages for 'setup project'.


  • How to add a disclaimer to forwarded messages to outside domains in Exchange 2013?

    - by Vinícius Ferrão
    I would like to implement some kind of filter to add a disclaimer message to emails forwarded to outside domains. Today we have some users who set up filters to forward messages to external mail servers, for example @gmail.com addresses. This kind of automatic forward should be marked with the disclaimer message, not normal Fwd: messages. We also have a Postfix mail-filtering gateway; if it's simpler to implement this on the mail filter, that would be a viable option. What would be the best approach to handle this issue? Thanks,
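
    One possible approach on the Exchange side, sketched below with the Exchange 2013 management shell: transport rules can match auto-forwarded messages and append a disclaimer only when the recipient is outside the organization. The rule name and disclaimer text here are placeholders.

        New-TransportRule -Name "External auto-forward disclaimer" `
            -MessageTypeMatches AutoForward `
            -SentToScope NotInOrganization `
            -ApplyHtmlDisclaimerLocation Append `
            -ApplyHtmlDisclaimerText "<p>This message was automatically forwarded from our domain.</p>" `
            -ApplyHtmlDisclaimerFallbackAction Wrap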

    Read the article

  • How to display workflow related tasks in the item display page where the workflow is currently running on in SharePoint2013

    - by ybbest
    In one of my projects, I needed to display workflow-related tasks on the display page of the item the workflow is currently running on. To achieve this, I'd like to add the tasks list view web part and use a connected web part (ID=WorkflowItemId). However, to make it work I need to unhide the WorkflowItemId field in the task list, as it is a hidden field and its CanToggleHidden property is set to false. I need to use reflection to change CanToggleHidden to true, since the API only exposes a getter, and then I am able to unhide the field. You can download the script here; a hedged sketch of the reflection step also appears after this entry. However, displaying tasks this way is not ideal (it makes your environment unsupported by Microsoft). Another way to display the related tasks is a SharePoint Designer solution with a List View web part and a data source. Here are the steps:

    1. Create a new list display form.
    2. Edit the custom display form in advanced mode.
    3. Find the PlaceHolderMain content placeholder and insert a DataView, choosing the associated workflow tasks list as the data source.
    4. Go to List View Tools >> OPTIONS.
    5. Create a parameter called workflowitemId which retrieves its value from the ID query string.
    6. Create a filter based on UIVersion = workflowitemId; we will change UIVersion to the WorkflowItemId property later, as WorkflowItemId is a hidden field and cannot be selected from the wizard.
    7. Replace UIVersion with WorkflowItemId in the CAML for the XsltListViewWebPart.
    8. Go to the new custom display page at http://yourserver/Lists/aa/CustomDisplayPage.aspx?ID=414, and you will see the associated tasks showing on the page.

    References: http://office.microsoft.com/en-us/sharepoint-designer-help/watch-this-design-a-document-review-workflow-solution-HA010256417.aspx (Videos 12 and 13)
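
    For reference, community scripts that perform the reflection step typically look like the hedged PowerShell sketch below. The internal method name SetFieldBoolValue is an assumption drawn from such scripts, not from the public API, so treat this as unsupported in every sense:

        $web = Get-SPWeb "http://yourserver"
        $field = $web.Lists["Workflow Tasks"].Fields["WorkflowItemId"]
        # CanToggleHidden has no public setter, so call the internal setter by name
        $flags = [System.Reflection.BindingFlags]"NonPublic,Instance"
        $method = $field.GetType().GetMethod("SetFieldBoolValue", $flags)   # assumed internal name
        $method.Invoke($field, @("CanToggleHidden", $true))
        $field.Hidden = $false   # now permitted
        $field.Update()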

    Read the article

  • The Red Gate and .NET Reflector Debacle

    - by Rick Strahl
    About a month ago Red Gate – the company who owns the .NET Reflector tool most .NET devs use at one point or another – decided to change their business model for Reflector and take the product from free to a fully paid-for license model. As a bit of history: .NET Reflector was originally created by Lutz Roeder as a free community tool to inspect .NET assemblies. Using Reflector you can examine the types in an assembly, drill into type signatures and quickly disassemble code to see how a particular method works. In case you've been living under a rock and you've never looked at Reflector, here's what it looks like drilled into an assembly from disk with some disassembled source code showing. Note that you get tons of information about each element in the tree, and almost all related types and members are clickable both in the list and source view, so it's extremely easy to navigate and follow the code flow even in this static assembly-only view.

    For many years Lutz kept the tool up to date and added more features, gradually improving an already amazing tool and making it better. Then about two and a half years ago Red Gate bought the tool from Lutz. A lot of ruckus and noise ensued in the community back then about what would happen with the tool and… for the most part very little did. Other than the incessant update notices with a prominent Red Gate promo on them, life with Reflector went on. The product didn't die and it didn't go commercial or to a charge model. When .NET 4.0 came out it still continued to work, mostly because the .NET feature set doesn't drastically change how types behave. Then a month back Red Gate started making noise about a new version, Version 7, which would be commercial. No more free version – and a shit storm broke out in the community.

    Now normally I'm not one to be critical of companies trying to make money from a product, much less for a product that's as incredibly useful as Reflector. There isn't a day in .NET development that goes by for me where I don't fire up Reflector. Whether it's for examining the innards of the .NET Framework, checking out third party code, or verifying some of my own code and resources. Even more so recently I've been doing a lot of Interop work with a non-.NET application that needs to access .NET components, and Reflector has been immensely valuable to me (and my clients) in figuring out the exact type signatures required to call .NET components in assemblies. In short, Reflector is an invaluable tool to me.

    Ok, so what's the problem? Why all the fuss? Certainly the $39 Red Gate is trying to charge isn't going to kill any developer. If there's any tool in .NET that's worth $39 it's Reflector, right? Right, but that's not the problem here. The problem is how Red Gate went about moving the product to commercial, which borders on the downright bizarre. It's almost as if somebody in management wrote a slogan: "How can we piss off the .NET community in the most painful way we can?" And at that, it seems, Red Gate has utterly succeeded. People are rabid, and for once I think that this outrage isn't exactly misplaced. Take a look at the message thread that Red Gate dedicated to this, linked from the download page. Not only is Version 7 going to be a paid commercial tool, but the older versions of Reflector won't be available any longer. Not only that, but older versions that are already in use will also continually try to update themselves to the new paid version – which, when installed, will then expire unless registered properly.
    There have also been reports of Version 6 installs shutting themselves down and failing to work if the update is refused (I haven't seen that myself, so I'm not sure if it's true). In other words, Red Gate is trying to make damn sure they're getting your money if you attempt to use Reflector. There's a lot of temptation there. Think about the millions of .NET developers out there and all of them possibly upgrading – that's a nice chunk of change that Red Gate's sitting on. Even with all the community backlash these guys are probably making some bank right now, just because people need to get on with life.

    Red Gate also put up a Feedback link on the download page – which, not surprisingly, is chock full of hate mail condemning the move. Oddly there's not a single response to any of those messages by the Red Gate folks, except when it concerns license questions for the full version. It puzzles me what purpose that link serves other than as yet another complete example of failure to understand how to handle customer relations. There's no doubt that all of this has caused some serious outrage in the community. The sad part though is that this could have been handled so much less arrogantly, without pissing off the entire community and causing so much ill will. People are pissed off, and I have no doubt that this negative publicity will show up in the sales numbers for their other products. I certainly hope so. Stupidity ought to be painful!

    Why do companies do boneheaded stuff like this? Red Gate's original decision to buy Reflector was hotly debated, but at the time most of what would happen was mostly speculation. I thought it was a smart move for any company in need of spreading its marketing message and corporate image as a vendor in the .NET space. Where else do you get to flash your corporate logo to hordes of .NET developers on a regular basis? Exploiting that marketing with the goodwill of providing a free tool breeds positive feedback that hopefully has a good effect on the company's visibility and the products it sells. Instead, Red Gate seems to have taken exactly the opposite tack of corporate bullying to try to make a quick buck – and in the process ruined any community goodwill that might have come from providing a service to the community for free while still getting valuable marketing.

    What's so puzzling about this boneheaded escapade is that the company doesn't need to resort to underhanded tactics like those they are trying with Reflector 7. The tools the company makes are very good. I personally use SQL Compare, SQL Data Compare and ANTS Profiler on a regular basis, and all of these tools are essential in my toolbox. They certainly work much better than the tools that are in the box with Visual Studio. Chances are that if Reflector 7 added useful features I would have been more than happy to shell out my $39 to upgrade when the time was right.

    It's Expensive to Give Away Stuff for Free

    At the same time, this episode shows some of the big problems that come with 'free' tools. A lot of organizations are realizing that giving stuff away for free is actually quite expensive, and the payback is often very intangible, if there is any at all. Those that rely on donations or other voluntary compensation find that the amount contributed is so minuscule as to not matter at all.
    Yet at the same time I bet most of those clamouring the loudest on that Red Gate Reflector feedback page about Reflector no longer being free have probably NEVER made a donation to any open source project or free tool, ever. The expectation of free these days is just too great – which is a shame, I think. There's a lot to be said for paid software and having somebody to hold responsible because you gave them some money. There's an incentive –> payback –> responsibility model that seems to be missing from free software (not all of it, but a lot of it). While there certainly are plenty of bad apples in paid software as well, money tends to be a good motivator for people to continue working on and improving products. Reasons for giving away stuff are many, but often it's a naïve desire to share things when things are simple. At first it might be no problem to volunteer time and effort, but as products mature the fun goes out of it, and as the reality of product maintenance kicks in, developers want to get something back for the time and effort they're putting into non-glamorous work. It's then that products die or languish, and this is painful for all to watch. For Red Gate, however, I think there was always a pretty good payback from the Reflector acquisition in terms of marketing: visibility and possible positioning of their products, although they seem to have mostly ignored that option. On the other hand, they started this off pretty badly even two and a half years back when they acquired Reflector from Lutz, with the same arrogant attitude that is evident in the latest episode. You really gotta wonder what folks in management are thinking – the sad part is that, from advance emails that were circulating, they were fully aware of the shit storm they were inciting with this, and I suspect they are banking on the sheer numbers of .NET developers to still make them a tidy chunk of change from upgrades…

    Alternatives are Coming

    For me personally the single license isn't a problem, but I actually have a tool that I sell (an Interop Web Service proxy generation tool) to customers, and one of the things I recommend using with it has been Reflector, to view assembly information and to find which Interop classes to instantiate from the non-.NET environment. It's been nice to use Reflector for this with its small footprint and zero-configuration installation. But now, with V7 becoming a paid tool, that option is not going to be available anymore. Luckily it looks like the .NET community is jumping to it and trying to fill the void. Amidst the Red Gate outrage a new library called ILSpy has sprung up, providing at least some of the core functionality of Reflector as an open source library. It looks promising going forward, and I suspect there will be a lot more support and interest in this project now that Reflector has gone over to the 'dark side'…

    © Rick Strahl, West Wind Technologies, 2005-2011

    Read the article

  • Boot Vista x64 with both ICH8 and ICH10 AHCI support

    - by adurity
    I have a situation where I need to boot Windows Vista 64-bit from both an ICH10 and an ICH8 AHCI SATA controller. Currently it is set up to boot from the ICH10, but when I try booting with the ICH8, I get the famed Windows STOP 7B BSOD. How can I add the ICH8 driver so that I can work around this BSOD and boot the system? I have updated to the latest Intel AHCI driver (8.9.0.1023 as of this post), which is supposed to support both chipsets, but I feel I am missing something.
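
    One avenue worth checking (a hedged sketch): a STOP 7B at boot usually means the boot-critical storage service isn't set to load at boot time, which a .reg merge before moving the disk can fix. The service name iaStor is an assumption about what the Intel AHCI driver registers; verify it under HKLM\SYSTEM\CurrentControlSet\Services first.

        Windows Registry Editor Version 5.00

        ; assumption: iaStor is the service name the Intel AHCI driver installs
        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\iaStor]
        "Start"=dword:00000000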

    Read the article

  • Future of BizTalk

    - by Vamsi Krishna
    The last TechEd conference was a very important one from a BizTalk perspective. Microsoft will continue innovating BizTalk and releasing BizTalk versions "for next few years to come", so all the investments that clients have made so far will continue giving returns. There are three flavors of BizTalk: BizTalk on-premise, BizTalk as IaaS, and BizTalk as PaaS (Azure Service Bus). A related session, Windows Azure IaaS and How It Works (Corey Sanders, June 12, 2012), covers the significant investments that Microsoft is making in the Windows Azure Infrastructure-as-a-Service solution: http://channel9.msdn.com/Events/TechEd/NorthAmerica/2012/AZR201

    TechEd provided two sessions around BizTalk futures:
    - AZR207: Application Integration Futures - The Road Map and What's Next on Windows Azure: http://channel9.msdn.com/Events/TechEd/NorthAmerica/2012/AZR207
    - AZR211: Building Integration Solutions Using Microsoft BizTalk On-Premises and on Windows Azure: http://channel9.msdn.com/Events/TechEd/NorthAmerica/2012/AZR211

    Here are some highlights from the two sessions. Bala Sriram, Director of Development for BizTalk, provided the introduction at the road map session and covered some key points around BizTalk:
    - More than 12,000 customers
    - 79% of BizTalk users have already adopted BizTalk Server 2010
    - BizTalk Server will be released for "years to come" after this release, i.e. there will be more releases of BizTalk after BizTalk 2010 R2
    - BizTalk 2010 R2 will be releasing 6 months after Windows 8
    - New Azure (cloud-based) BizTalk scenarios will be available in IaaS and PaaS
    - BizTalk Server on-premises, IaaS, and PaaS are complementary and designed to work together
    - BizTalk investments will be taken forward

    The second session was mainly around drilling into and demonstrating the upcoming capabilities in BizTalk Server on-premise, IaaS, and PaaS:
    - BizTalk IaaS: users will be able to bring their own VM or choose from an Azure IaaS template to quickly provision BizTalk Server virtual machines (VHDs)
    - BizTalk Server 2010 R2: a native adapter to connect to Azure Service Bus Queues, Topics and Relay, and a native send port adapter for REST (demonstrated by connecting to Salesforce to get and update Salesforce information); the ESB Toolkit will be incorporated as part of the product and setup
    - BizTalk PaaS: HTTP, FTP, Service Bus and LOB application connectivity; XML and flat file processing; message properties; transformation, archiving and tracking; content-based routing

    Read the article

  • Get Started using Build-Deploy-Test Workflow with TFS 2012

    - by Jakob Ehn
    TFS 2012 introduces a new type of Lab environment called Standard Environment. This allows you to set up a full Build-Deploy-Test (BDT) workflow that will build your application, deploy it to your target machine(s) and then run a set of tests on that server to verify the deployment. In TFS 2010, you had to use System Center Virtual Machine Manager and involve half of your IT department to get going. Now all you need is a server (virtual or physical) where you want to deploy and test your application. You don't even have to install a test agent on the machine; TFS 2012 will do this for you! Although each step is rather simple, the entire process of setting it up consists of a bunch of steps, so I thought that it could be useful to run through a typical setup. I will also link to some good guidance from MSDN on each topic.

    High Level Steps
    1. Install and configure Visual Studio 2012 Test Controller on the target server
    2. Create Standard Environment
    3. Create Test Plan with Test Case
    4. Run Test Case
    5. Create Coded UI Test from Test Case
    6. Associate Coded UI Test with Test Case
    7. Create Build Definition using LabDefaultTemplate

    1. Install and Configure Visual Studio 2012 Test Controller on the Target Server
    First of all, note that you do not have to have the Test Controller running on the target server. It can be running on another server, as long as the Test Agent can communicate with the test controller and the test controller can communicate with the TFS server. If you have several machines in your environment (web server, database server etc.), the test controller can be installed either on one of those machines or on a dedicated machine. To install the test controller, simply mount the Visual Studio Agents media on the server and browse to the vstf_controller.exe file located in the TestController folder. Run through the installation; you might need to reboot the server since it installs .NET 4.5. When the test controller is installed, the Test Controller configuration tool will launch automatically (if it doesn't, you can start it from the Start menu). Here you will supply the credentials of the account running the test controller service. Note that this account will be given the necessary permissions in TFS during the configuration. Make sure that you have entered a valid account by pressing the Test link. Also, you have to register the test controller with the TFS collection where your test plan is located (and usually the code base, of course). When you press Apply Settings, all the configuration will be done. You might get some warnings at the end that might or might not cause a problem later. Be sure to read them carefully. For more information about configuring your test controllers, see Setting Up Test Controllers and Test Agents to Manage Tests with Visual Studio.

    2. Create Standard Environment
    Now you need to create a Lab environment in Microsoft Test Manager. Since we are using an existing physical or virtual machine, we will create a Standard Environment. Open MTM and go to Lab Center. Click New to create a new environment and enter a name for it. Since this environment will only contain one machine, we will use the machine name for the environment (TargetServer in this case). On the next page, click Add to add a machine to the environment. Enter the name of the machine (TargetServer.Domain.Com) and give it the Web Server role. The name must be reachable both from your machine during configuration and from the TFS app tier server.
    You also need to supply an account that is a local administrator on the target server. This is needed in order to automatically install a test agent later on the machine. On the next page, you can add tags to the machine. This is not needed in this scenario, so go to the next page. Here you will specify which test controller to use and that you want to run UI tests on this environment. This will result in a Test Agent being automatically installed and configured on the target server. The name of the machine where you installed the test controller should be available in the drop-down list (TargetServer in this sample). If you can't see it, you might have selected a different TFS project collection. Press Next twice and then Verify to verify all the settings, then press Finish. This will now create and prepare the environment, which means that it will remotely install a test agent on the machine. As part of this installation, the remote server will be restarted.

    3-5. Create Test Plan, Run Test Case, Create Coded UI Test
    I will not cover steps 3-5 here; there is plenty of information on how you create test plans and test cases and automate them using Coded UI Tests. In this example I have a test plan called My Application, and it contains, among other things, a test suite called Automated Tests where I plan to put test cases that should be automated and executed as part of the BDT workflow. For more information about Coded UI Tests, see Verifying Code by Using Coded User Interface Tests.

    6. Associate Coded UI Test with Test Case
    OK, so now we want to automate our Coded UI Test and have it run as part of the BDT workflow. You might think that your coded UI test is already automated, but the meaning of the term here is that you link your coded UI test to an existing Test Case, thereby making the Test Case automated. And the test case should be part of the test suite that we will run during the BDT. Open the solution that contains the coded UI test method. Open the Test Case work item that you want to automate. Go to the Associated Automation tab and click on the "…" button. Select the coded UI test that corresponds to the test case, press OK and then save the test case. For more information about associating an automated test with a test case, see How to: Associate an Automated Test with a Test Case.

    7. Create Build Definition using LabDefaultTemplate
    Now we are ready to create a build definition that will implement the full BDT workflow. For this purpose we will use the LabDefaultTemplate.11.xaml that comes out of the box in TFS 2012. This build process template lets you take the output of another build and deploy it to each target machine. Since the deployment process will be running on the target server, you will have fewer problems with permissions and firewalls than if you were to remote-deploy your solution. So, before creating a BDT workflow build definition, make sure that you have an existing build definition that produces a release build of your application. Go to the Builds hub in Team Explorer and select New Build Definition. Give the build definition a meaningful name; here I called it MyApplication.Deploy. Set the trigger to Manual and define a workspace for the build definition. Note that a BDT build doesn't really need a workspace, since all it does is launch another build definition and deploy the output of that build, but TFS doesn't allow you to save a build definition without adding at least one mapping. On Build Defaults, select the build controller.
    Since this build actually won't produce any output, you can select the "This build does not copy output files to a drop folder" option. On the Process tab, select the LabDefaultTemplate.11.xaml. This is usually located at $/TeamProject/BuildProcessTemplates/LabDefaultTemplate.11.xaml. To configure it, press the "…" button on the Lab Process Settings property. First, select the environment that you created before, then select which build you want to deploy and test. The "Select an existing build" option is very useful when developing the BDT workflow, because you do not have to run through the target build every time; instead it will basically just run through the deployment and test steps, which speeds up the process. Here I have selected to queue a new build of the MyApplication.Test build definition. On the Deploy tab, you need to specify how the application should be installed on the target server. You can supply a list of deployment scripts with arguments that will be executed on the target server. In this example I execute the generated web deploy command file to deploy the solution. If, for example, you have databases, you can use sqlpackage.exe to deploy the database; if you are producing MSI installers in your build, you can run them using msiexec.exe, and so on. A good practice is to create a batch file that contains the entire deployment and that you can run both locally and on the target server. Then you would just execute the deployment batch file here in one single step (a sketch of such a wrapper follows below). The workflow defines some variables that are useful when running the deployments. These variables are:

    $(BuildLocation) - the full path to where your build files are located
    $(InternalComputerName_<VM Name>) - the computer name for a virtual machine in an SCVMM environment
    $(ComputerName_<VM Name>) - the fully qualified domain name of the virtual machine

    As you can see, I specify the path to the myapplication.deploy.cmd file using the $(BuildLocation) variable, which is the drop folder of the MyApplication.Test build. Note: the test agent account must have read permission in this drop location. You can find more information here on Building your Deployment Scripts. On the last tab, we specify which tests to run after deployment. Here I select the test plan and the Automated Tests test suite that we saw before. Note that I also selected the automated test settings (called TargetServer in this case) that I have defined for my test plan. In there I define what data should be collected as part of the test run. For more information about test settings, see Specifying Test Settings for Microsoft Test Manager Tests. We are done! Queue your BDT build and wait for it to finish. If the build succeeds, your build summary should look something like this: [build summary screenshot]
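
    Going back to the deployment step, here is a hedged illustration of that single-step wrapper. The package and script paths are assumptions based on the usual TFS drop layout, not taken from the build above:

        @echo off
        rem deploy.cmd - one entry point for the whole deployment,
        rem runnable both locally and from the BDT workflow.
        rem %~dp0 expands to the folder this script lives in (the drop location)
        call "%~dp0_PublishedWebsites\MyApplication_Package\MyApplication.deploy.cmd" /Y

        rem hypothetical database step - swap in sqlpackage.exe or msiexec.exe as needed
        sqlcmd -S localhost -i "%~dp0Database\deploy.sql"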

    Read the article

  • Linux Flash Player with 2 Monitors: always full-screen on primary monitor

    - by CarlF
    My setup at home uses a laptop, with a larger external monitor in addition to the built-in LCD panel, which is primary. I can see the larger monitor from the rest of the room and use it as my TV, for playing DVDs and various types of web video. However, it isn't ideal for Flash video. For instance, if I watch a video from Hulu or any other Flash-based site, I can expand it to full-screen mode. However, no matter which monitor the browser window is on, the full-screen mode is always on the laptop LCD panel, which is both too small and not visible from most of the room. Does anyone know of a way to force the Flash video to play full-screen on the monitor I select instead of the primary? My video chipset is NVidia, using kernel 2.6.31 (Ubuntu). Thanks.
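
    One workaround that may help (a sketch; it assumes the driver exposes RandR 1.3, which older NVIDIA binary drivers in TwinView mode may not, and that the external screen is the hypothetical output VGA1 - check the real name with xrandr -q): Flash tends to go full-screen on the primary output, so promoting the external monitor to primary can redirect it.

        # make the external monitor the primary output, then retry full-screen
        xrandr --output VGA1 --primary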

    Read the article

  • Apache2 - setting PERL5LIB via SetEnv under CGI

    - by j0nes
    Hi, my setup is as follows: I have one Apache2 webserver running different vhosts; one vhost is for the production website, the other vhost is for a staging/preview system. Both vhosts have different DocumentRoots and also different (Perl) CGI folders. The modules used by each of these vhosts should come from different directories, so I did the following:

        <VirtualHost ...>
            ServerName production
            SetEnv PERL5LIB /home/production/modules
        </VirtualHost>

        <VirtualHost ...>
            ServerName staging
            SetEnv PERL5LIB /home/staging/modules
        </VirtualHost>

    However, I just noticed that in my Perl CGI scripts both paths end up in @INC, so I cannot separate the staging modules from the production modules; the SetEnv directive does not appear to be limited to a single virtual host, but seems to work globally. How can I solve this? Thanks! Jonas
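
    One way to sidestep the shared PERL5LIB (a sketch, under the assumption that a persistent interpreter such as mod_perl is merging the two environments): avoid PERL5LIB entirely and pull the path from a hypothetical per-vhost variable, e.g. SetEnv MODULE_DIR /home/staging/modules, inside each script:

        #!/usr/bin/perl
        use strict;
        use warnings;

        BEGIN {
            # pick up the per-vhost module directory at request time,
            # instead of relying on PERL5LIB at interpreter startup
            unshift @INC, $ENV{MODULE_DIR} if $ENV{MODULE_DIR};
        }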

    Read the article

  • Windows Server 2003 R2 Standard: Locks MS Office files, but not Adobe .AI and .PSD files?

    - by Bruce Garlock
    We have some shares set up on a Windows 2003 R2 server, and the MS Office files people save behave properly: the first person to open the file gets read/write, and the second person to open the file while the first person still has it open gets a read-only version. This is not true for the graphics files, like Adobe Illustrator .AI files and Photoshop .PSD files. Anyone who goes to open these files has full read/write, even if someone else is already working on the file! This has led to numerous file corruption issues, as well as other lost work, since it always saves the last changes to the file. How do we get Windows to properly lock these files so that when someone is working on a file and someone else wants to open it, they get read-only access? Many thanks, Bruce

    Read the article

  • Where can I compare monitors with a given VESA mount?

    - by Dan Rasmussen
    I am looking into purchasing a dual-monitor setup, and need to purchase two monitors with VESA MIS-D mounts. My only problem is that that information doesn't seem to be readily available on most shopping websites. Neither Amazon nor Newegg seem to have the information searchable or filterable. I could shop for monitors, then Google around to see if they support VESA MIS-D, but is there a better way? Is there a resource (not necessarily a store - once I find a monitor I can shop elsewhere) where I can browse a variety of monitor specs and reviews while only looking at monitors with a certain VESA mount?

    Read the article

  • Unable to configure Ruby with readline

    - by Liam Berg
    1) ./configure --prefix=$HOME/.packages --with-readline-dir=$HOME/.packages
    2) configure: WARNING: unrecognized options: --with-readline-dir

    I am trying to set up the most up-to-date version of Ruby on my webhost (I do not have sudo access). Line 1 is the configure command I used for Ruby, and Line 2 is the first line printed after executing configure. I've googled this issue and found other people with the same problem, but there aren't any real solutions. There are no warnings or errors when configuring/compiling readline-6.1. I am pretty stumped; any help/insight would be greatly appreciated. Thanks ahead of time.
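
    For what it's worth, that warning from Ruby's top-level configure can be benign (the --with-*-dir options are consumed later by the extension builds). If readline still doesn't get built, one hedged alternative is the --with-opt-dir switch, which points every extension at a common prefix. A sketch, assuming readline-6.1 was already installed under $HOME/.packages:

        ./configure --prefix=$HOME/.packages --with-opt-dir=$HOME/.packages
        make && make install
        ruby -rreadline -e 'puts "readline OK"'   # quick smoke test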

    Read the article

  • Log php errors in ubuntu

    - by resting
    I followed the setup here: Where is the PHP error log. When I look into /var/log/php_errors.log, I can see some PHP errors:

        PHP Warning: file_get_contents(/var/www/...): failed to open stream: No such file or directory in ...

    But what I'm trying to see is the error produced when I remove a semicolon from a statement. The warning above has no relation to the file from which I removed the semicolon, so we can ignore it. When I access the page with the removed semicolon, I get:

        The website encountered an error while retrieving https://myapp/download/decode/testfile. It may be down for maintenance or configured incorrectly.
        HTTP Error 500 (Internal Server Error): An unexpected condition was encountered while the server was attempting to fulfill the request.

    But there is nothing in /var/log/php_errors.log. How do I see the error that usually tells me in which file and on which line the process failed? The real reason for trying to see the error is that I have a very large loop that throws the HTTP 500 error, and I can't see the exact error; I'm just simulating it with a removed semicolon to test things out. Other settings:

        error_reporting = E_ALL & ~E_DEPRECATED
        display_errors = On

    On Ubuntu 10.04.4 LTS.

    Update: OK, I managed to get the error message to display:

        Parse error: syntax error, unexpected T_IF in ...

    However, it's still not logged. It wasn't displaying previously because CakePHP's debug level was at 0. Setting it to 2 displays the message, but there is still no log entry.
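
    A note on why the parse error behaves differently, with a sketch of the directives involved: parse errors are raised at compile time, before any code runs, including CakePHP's error handling or any ini_set() call in the script itself, so only the server-level configuration decides whether they get logged. Assuming the log path from the guide:

        ; php.ini for the Apache SAPI
        log_errors = On
        error_log = /var/log/php_errors.log
        ; display_errors / error_reporting set inside a script can never
        ; affect a parse error in that same file - it never compiles

    Restart Apache after changing php.ini so the settings take effect.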

    Read the article

  • How to Upload Really Large Files to SkyDrive, Dropbox, or Email

    - by Matthew Guay
    Do you need to upload a very large file to store online or email to a friend? Unfortunately, whether you're emailing a file or using online storage sites like SkyDrive, there's a limit on the size of files you can use. Here's how to get around the limits. SkyDrive only lets you add files up to 50 MB, and while the Dropbox desktop client lets you add really large files, the web interface has a 300 MB limit, so if you were on another PC and wanted to add giant files to your Dropbox, you'd need to split them. This same technique also works for any file sharing service—even if you were sending files through email. There are two ways you can get around the limits: first, by just compressing the files if you're close to the limit; the second and more interesting way is to split up the files into smaller chunks. Keep reading for how to do both.
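
    As a sketch of the splitting technique with a command-line tool (7-Zip's volume switch; the file names here are placeholders):

        :: cut hugefile.iso into 45 MB volumes that fit under SkyDrive's 50 MB cap
        7z a -v45m hugefile.7z hugefile.iso

        :: the recipient extracts the first volume and 7-Zip rejoins the rest
        7z x hugefile.7z.001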

    Read the article

  • Bootstrap a debian build environment and build source packages with no root privileges

    - by Erwan Queffélec
    On Debian squeeze, I am trying to do the following:
    - fetch source packages from the wheezy source repository
    - bootstrap a squeeze chroot for several architectures
    - build the packages for several architectures (i386, amd64 + all and any)

    I want the fetching, bootstrapping and build operations to be scriptable, repeatable, and run as a normal user. For the environment setup, I want to make as little use of the root account as possible (install the necessary dependencies, and maybe some visudo stuff). If possible I would like to avoid using a VM (pbuilder with User Mode Linux). So far I have tried several things with pbuilder (requires root) and debootstrap (requires root) with little success. Here is an example script of what I want to do (does not work):

        #!/bin/bash
        set -e
        set -x

        this=`readlink -f $0`
        this_dir=`dirname $this`
        archs='i386 amd64 any all'

        pushd $this_dir/src
        # I actually want the following line to work with a repo that
        # is not in /etc/apt/sources.list but that is another question
        apt-get source cyrus-imapd-2.4
        popd

        for arch in $archs
        do
            build_dir=$this_dir/build/$arch/
            pbuilder --create --configfile $build_dir/pbuilderrc --buildresult $build_dir/
            pbuilder --build --configfile $build_dir/pbuilderrc --buildresult \
                $build_dir/ $this_dir/src/*.dsc
            # of course I want to use the .dput.cf in /home/myuser/
            # and not in /root/
            dput $build_dir/$arch/*.changes
        done
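
    One root-free avenue worth trying for the bootstrap step (a sketch, not verified on squeeze): debootstrap ships a fakechroot variant that fakes chroot(2) and file ownership, so the whole tree can be created and entered as a normal user.

        # bootstrap a squeeze tree without root
        fakechroot fakeroot debootstrap --variant=fakechroot squeeze \
            ./squeeze-chroot http://ftp.debian.org/debian

        # enter it the same way
        fakechroot fakeroot chroot ./squeeze-chroot /bin/bash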

    Read the article

  • VPN not connecting after resuming from standby on Vista home premium

    - by joe
    Hello, I have Windows Vista Home Premium on my laptop, and I have a Windows VPN set up too. All of a sudden, my VPN doesn't work when my PC resumes from standby mode. I have tried all the options, but I still get this error:

        CoId={5B82082D-9236-4AF9-900B-4E8072341C76}: The user workgroup\user dialed a connection named VPNName which has failed. The error code returned on failure is 0.

    I am also getting the following error in the event logs:

        The user workgroup\user dialed a connection named VpnName which has failed. The error code returned on failure is 800.

    Please help.

    Read the article

  • SVN Authz - Any Subfolder permission or List contents

    - by Jaspa Jones
    Goal: basically I would like SVN users to be able to browse through a directory containing a lot of subfolders without allowing them to read those subfolders.

        [/]
        * = r

        [/Projects]
        * =    # Allow viewing contents, but not reading. At least to be able to see Project1.

        [/Projects/Project1]
        my_group = rw

    Problem: there are a lot of projects. I could add every other project and make them disappear for the user, but that would be a lot of work to maintain. It would look like this:

        [/]
        * = r

        [/Projects]
        * = r

        [/Projects/Project1]
        my_group = rw

        [/Projects/Project2]
        * =

        [/Projects/Project3]
        * =

        [/Projects/Project4]
        * =

        [/Projects/Project5]
        * =

    It would be nice if I could use this:

        [/Projects/*]
        * =

    Any ideas? Thanks in advance, Jaspa Jones
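
    If the authz syntax itself can't express the wildcard (path-based wildcards were not supported in Subversion of this vintage), one pragmatic fallback is to generate the file instead of maintaining it by hand. A sketch in Python; the project list and file path are assumptions:

        #!/usr/bin/env python
        # regenerate the authz stanzas from a project list
        projects = ["Project%d" % i for i in range(1, 6)]
        visible = {"Project1": "my_group = rw"}   # everything else stays hidden

        lines = ["[/]", "* = r", "", "[/Projects]", "* = r", ""]
        for p in projects:
            lines += ["[/Projects/%s]" % p, visible.get(p, "* ="), ""]

        open("authz", "w").write("\n".join(lines))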

    Read the article

  • setting up eclim to support php

    - by tipu
    I have the PDT plugin installed with my Eclim, using:

        DISPLAY=:1 ./eclipse/eclipse -nosplash -consolelog -debug \
            -application org.eclipse.equinox.p2.director \
            -repository http://download.eclipse.org/releases/helios \
            -installIU org.eclipse.php.feature.group

    I compiled Eclim with the -D argument for PHP:

        ant -Declipse.home=/home/tipu/downloads/eclipse -Dplugins=php

    But creating a project gives me:

        java.lang.IllegalArgumentException: Unable to find nature for alias 'php'. Supported aliases include: javascript=org.eclipse.wst.jsdt.core.jsNature, java=org.eclipse.jdt.core.javanature
        while executing command (port: 9091): -editor vim -command project_create -f "/home/tipu/phpproj2/" -n php

    Thoughts on how to fix this?

    Read the article

  • MySQL started to consume about 40% of system CPU time and got unresponsive suddenly

    - by Alex
    I use Debian 6.0.3 x86_64 and MySQL 5.5.20-1~dotdeb.0-log from the Dotdeb repository. The MySQL process started to consume a lot of "sy" (system) CPU time several hours ago, according to this graph. I was not able to connect to the running mysqld process and had to kill it. I found nothing useful in the logs. My setup seems to be quite common (I assume Dotdeb just redistributes stock MySQL versions) and I have never seen anything like this before. What is the possible root cause of this? How can I prevent this situation in the future?
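
    Some diagnostics worth having ready for the next spike (a sketch; both tools are standard, pidstat comes from the sysstat package):

        # per-process CPU split (user vs system), sampled every 5 seconds
        pidstat -u -p $(pidof mysqld) 5

        # attach briefly and summarize which system calls burn the time
        strace -c -f -p $(pidof mysqld)

    A syscall summary dominated by futex or mmap/munmap would point towards lock contention or memory management (e.g. swapping or buffer pool churn) rather than raw query load.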

    Read the article

  • How do I classify using GLCM and SVM Classifier in Matlab?

    - by Gomathi
    I'm on a project for liver tumor segmentation and classification. I used region growing and FCM for liver and tumor segmentation respectively. Then I used the gray-level co-occurrence matrix (GLCM) for texture feature extraction. I have to use a Support Vector Machine for classification, but I don't know how to normalize the feature vectors. Can anyone tell me how to program it in MATLAB? To the GLCM program, I gave the tumor-segmented image as input. Was I correct? If so, I think my output will also be correct. My GLCM code, as far as I have tried, is:

        I = imread('fzliver3.jpg');
        GLCM = graycomatrix(I,'Offset',[2 0;0 2]);
        stats = graycoprops(GLCM,'all')
        t1 = struct2array(stats)

        I2 = imread('fzliver4.jpg');
        GLCM2 = graycomatrix(I2,'Offset',[2 0;0 2]);
        stats2 = graycoprops(GLCM2,'all')
        t2 = struct2array(stats2)

        I3 = imread('fzliver5.jpg');
        GLCM3 = graycomatrix(I3,'Offset',[2 0;0 2]);
        stats3 = graycoprops(GLCM3,'all')
        t3 = struct2array(stats3)

        t = [t1; t2; t3]
        xmin = min(t);
        xmax = max(t);
        scale = xmax - xmin;
        tf = (x - xmin) / scale

    Was this a correct implementation? Also, I get an error at the last line. My output is:

        stats =
            Contrast: [0.0510 0.0503]  Correlation: [0.9513 0.9519]  Energy: [0.8988 0.8988]  Homogeneity: [0.9930 0.9935]
        t1 =
            0.0510  0.0503  0.9513  0.9519  0.8988  0.8988  0.9930  0.9935

        stats2 =
            Contrast: [0.0345 0.0339]  Correlation: [0.8223 0.8255]  Energy: [0.9616 0.9617]  Homogeneity: [0.9957 0.9957]
        t2 =
            0.0345  0.0339  0.8223  0.8255  0.9616  0.9617  0.9957  0.9957

        stats3 =
            Contrast: [0.0230 0.0246]  Correlation: [0.7450 0.7270]  Energy: [0.9815 0.9813]  Homogeneity: [0.9971 0.9970]
        t3 =
            0.0230  0.0246  0.7450  0.7270  0.9815  0.9813  0.9971  0.9970

        t =
            0.0510  0.0503  0.9513  0.9519  0.8988  0.8988  0.9930  0.9935
            0.0345  0.0339  0.8223  0.8255  0.9616  0.9617  0.9957  0.9957
            0.0230  0.0246  0.7450  0.7270  0.9815  0.9813  0.9971  0.9970

        ??? Error using ==> minus
        Matrix dimensions must agree.

    The images are: [attached in the original post]
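
    On the error itself (a sketch of the fix): x is never defined (t was presumably intended), and in MATLAB releases of that era subtracting a 1-by-8 row vector from a 3-by-8 matrix needs explicit replication, e.g. with repmat:

        % min-max normalize each feature column of t into [0,1]
        xmin  = min(t);                 % 1-by-8 row of column minima
        xmax  = max(t);
        scale = xmax - xmin;
        n  = size(t, 1);                % number of images (rows)
        tf = (t - repmat(xmin, n, 1)) ./ repmat(scale, n, 1);

    Note the element-wise ./ as well; a plain / would attempt matrix right-division. At prediction time, the same xmin and scale computed on the training set should be reused for new samples.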

    Read the article

  • transaction log shipping sql server 2005 to 2008

    - by Andrew Jahn
    I have a reporting setup with SSRS on our SQL Server 2005 database. Because SQL Server 2008 is not supported by the main program which populates our database, we are stuck with 2005 on our prod database. Unfortunately, when I run our weekly check reports, the web interface constantly times out because the server can't do the conversion to PDF. I've read that SQL Server 2008's SSRS is a lot better with memory management. I was wondering if I can do some kind of transaction log shipping subscription/publication from 2005 to 2008? Am I chasing a dream here? Currently I have to open the SSRS project in Visual Studio and run the reports inside, because it never times out when doing the PDF conversion there; it only times out if I try to run it through the SSRS web interface.

    Read the article

  • Components needed for a VPN

    - by Anriëtte Combrink
    Hi there. First of all, I asked this question on SuperUser.com, but no one there could help me. We eventually got our Mac Mini Server, and we now want to set up a small remote-access VPN using it. We are not sure which components are needed in addition to the server to set up this VPN. We currently have the following:
    - 1 Mac Mini Server
    - 1 firewall router (Billion 802.11g ADSL2+ router with VPN capabilities, or so it says on the box); we currently use this as our internet connection and it resides at X.X.0.1 on our network
    - a 4 Mbps ADSL connection (should this line have VPN capability enabled by the service provider?)
    We are not sure what else needs to be included to enable our small VPN. Any advice would be really helpful.

    Read the article

  • virtualizing a .net application

    - by vvavepacket
    My professor asked me to create an application that will run on a machine without .NET installed. My application uses .NET 4.0 (just basic C# necessities, without LINQ or complicated references) and he only wants an executable file. So how can we run this application on his machine without installing .NET (the setup files and so on)? I was thinking of virtualizing via VMware ThinApp, but I find it difficult since ThinApp requires the application to be installed, yet my professor only wants the application in .exe format (so ThinApp cannot track the changes in my system). Any alternatives or suggestions?

    Read the article

  • Guest Post: Using IronRuby and .NET to produce the 'Hello World of WPF'

    - by Eric Nelson
    [You might want to also read other GuestPosts on my blog – or contribute one?] On the 26th and 27th of March (2010) myself and Edd Morgan of Microsoft will be popping along to the Scottish Ruby Conference. I dabble with Ruby and am a huge fan, whilst Edd is a "proper Ruby developer". Hence I asked Edd if he was interested in creating a guest post or two for my blog on IronRuby. This is the second of those posts. If you should stumble across this post and happen to be attending the Scottish Ruby Conference, then please do keep a look out for myself and Edd. We would both love to chat about all things Ruby and IronRuby. And… we should have (if Amazon is kind) a few books on IronRuby with us at the conference which will need to find a good home. This is me and Edd and… the book. Order on Amazon: http://bit.ly/ironrubyunleashed

    Using IronRuby and .NET to produce the 'Hello World of WPF'

    In my previous post I introduced, to a minor extent, IronRuby. I expanded a little on the basics by getting a Rails app up and running on this .NET implementation of the Ruby language — but there wasn't much to it! So now I would like to go from simply running a pre-existing project under IronRuby to developing a whole new application demonstrating the seamless interoperability between IronRuby and .NET. In particular, we'll be using WPF (Windows Presentation Foundation) — the component of the .NET Framework stack used to create rich media and graphical interfaces.

    Foundations of WPF

    To reiterate, WPF is the engine in the .NET Framework responsible for rendering rich user interfaces and other media. It's not the only collection of libraries in the framework with the power to do this — Windows Forms does the trick, too — but it is the most powerful and flexible. Put simply, WPF really excels when you need to employ eye candy. It's all about creating impact. Whether you're presenting a document, video, a data entry form, some kind of data visualisation (which I am most hopeful for, especially in terms of IronRuby - more on that later) or chaining all of the above with some flashy animations, you're likely to find that WPF gives you the most power when developing any of these for a Windows target. Let's demonstrate this with an example. I give you what I like to consider the 'hello, world' of WPF applications: the analogue clock.

    "Today, over my lunch break, I created a WPF-based analogue clock using IronRuby... Any normal person would have just looked at their watch." - Twitter

    The Sample Application: Click here to see this sample in full on GitHub. [Code listing: using Windows Presentation Foundation from IronRuby to create a Clock class] [Code listing: invoking the Clock class] Gives you: [screenshot of the rendered clock]

    The above is by no means perfect (it was a lunch break), but I think it does the job of illustrating IronRuby's interoperability with WPF using a familiar data visualisation. I'm sure you'll want to dissect the code yourself, but allow me to step through the important bits. (By the way, feel free to run this through ir first to see what actually happens.) Now we're using IronRuby - unlike my previous post where we took pure Ruby code and ran it through ir, the IronRuby interpreter, to demonstrate compatibility. The main thing of note is the very distinct parallels between .NET namespaces and Ruby modules, .NET classes and Ruby classes. I guess there's not much to say about it other than at this point, you may as well be working with a purely Ruby graphics-drawing library.
    You're instantiating .NET objects, but you're doing it with the standard .new method you know from Ruby as Object#new — although the root object of all your IronRuby objects isn't actually Object, it's System.Object. You're calling methods on these objects (and classes, for example in the call to System.Windows.Controls.Canvas.SetZIndex()) using the underscored, lowercase convention established for the Ruby language. The integration is so seamless. The fact that you're using a dynamic language on top of .NET's CLR is completely abstracted from you, allowing you to just build your software.

    A Brief Note on Events

    Events are a big part of developing client applications in .NET, as well as under every other environment I can think of. In case you aren't aware, event-driven programming is essentially the practice of telling your code to call a particular method, or other chunk of code (a delegate), when something happens at an unpredictable time. You can never predict when a user is going to click a button, move their mouse or perform any other kind of input, so the advent of the GUI is what necessitated event-driven programming. This is where one of my favourite aspects of the Ruby language, blocks, can really help us. In traditional C#, for instance, you may subscribe to an event (assign a block of code to execute when an event occurs) in one of two ways: by passing a reference to a named method, or by providing an anonymous code block. You'd be right in seeing the parallel here with Ruby's concept of blocks, Procs and lambdas. As demonstrated at the very end of this rather basic script, we are using .NET's System.Timers.Timer to (attempt to) update the clock every second (I know it's probably not the best way of doing this, but for example's sake).

    Note: Diverting a little from what I said above, the ticking of a clock is very predictable, yet we still use the event our Timer throws to do this updating, as one of many ways to perform that task outside of the main thread.

    You'll see that all that's needed to assign a block of code to be triggered on an event is to provide that block to the method named after the event as it is known to the CLR. The drawback to this is that it only allows the delegation of one code block to each event. You may use the add method to subscribe multiple handlers to that event, pushing each one to the end of a queue. Like so:

        def tick
          puts "tick tock"
        end
        timer.elapsed.add method(:tick)

        timer.elapsed.add proc { puts "tick tock" }

        tick_handler = lambda { puts "tick tock" }
        timer.elapsed.add(tick_handler)

    The ability to just provide a block of code as an event handler helps IronRuby towards that very important term I keep throwing around: low ceremony. Anonymous methods are, of course, available in other more conventional .NET languages such as C# and VB but, as usual, feel ever so much more elegant and natural in IronRuby.

    Note: Whether it's a named method or an anonymous chunk o' code, the block you delegate to the handling of an event can take arguments - commonly, a sender object and some args.

    Another Brief Note on Verbosity

    Personally, I don't mind verbose chaining of references in my code as long as it doesn't interfere with performance - as evidenced in the example above. While I love clean code, there's a certain feeling of safety that comes with the explicitness of long-winded addressing and the describing of objects, as opposed to ambiguity (not unlike this sentence).
    However, when working with IronRuby, even I grow tired of typing System::Whatever::Something. Some people enjoy simply assuming namespaces and forgetting about them, regardless of the language they're using. Don't worry, IronRuby has you covered. It is completely possible, with a call to include, to bring the contents of a .NET-converted module into the context of your IronRuby code - just as you would if you wanted to bring in an 'organic' Ruby module. To refactor the style of the above example, I could place the following at the top of my Clock class:

        class Clock
          include System::Windows::Shapes
          include System::Windows::Media
          include System::Windows::Threading
          # and so on...

    And by doing so, reduce calls to System::Windows::Shapes::Ellipse.new to simply Ellipse.new, or references to System::Windows::Threading::DispatcherPriority.Render to a friendlier DispatcherPriority.Render.

    Conclusion

    I hope by now you can better understand how IronRuby interoperates with .NET and how you can harness the power of the .NET Framework with the dynamic nature and elegant idioms of the Ruby language. The manner and parlance of Ruby that makes it a joy to work with sets of data is, of course, present in IronRuby — couple that with WPF's capability to produce great graphics quickly and easily, and I hope you can visualise the possibilities of data visualisation using these two things. Using IronRuby and WPF together to create visual representations of data and infographics is very exciting to me. Although today, with this project, we're only presenting one simple piece of information - the time - the potential is much grander. My day-to-day job is centred around software development and UI design, specifically in the realm of healthcare, and if you were to pay a visit to our office you would behold, directly above my desk, a large plasma TV with a constantly rotating, animated slideshow of charts and infographics that help members of our team do their jobs. It's an app powered by WPF which never fails to spark some conversation with visitors whose gaze has been hooked. If only it were written in IronRuby - the pleasantly low ceremony and reduced pre-processing time for my brain would have helped greatly. Edd Morgan's blog

    Related Links: Getting PHP and Ruby working on Windows Azure and SQL Azure

    Read the article

  • Using SharePoint PeoplePicker control in custom ASP.NET pages

    - by Jignesh Gangajaliya
    I was developing a custom ASP.NET page for a SharePoint project, and the page uses the SharePoint PeoplePicker control. I needed to manipulate the control on the client side based on user input. PeoplePicker is a complex control, and the difficult bit is that it renders many controls on the page (use the page source viewer to see the HTML tags generated). So getting to the right bit is tricky, and the default JavaScript approaches (control.disabled, control.focus()) will not work with the PeoplePicker control. After some trial and error I came up with a solution to manipulate the control using JavaScript. Here is the JavaScript snippet to enable/disable the PeoplePicker control:

        function ToggleDisabledPeoplePicker(element, isDisabled) {
            try {
                element.disabled = isDisabled;
            }
            catch (exception) {}

            if ((element.childNodes) && (element.childNodes.length > 0)) {
                for (var index = 0; index < element.childNodes.length; index++) {
                    ToggleDisabledPeoplePicker(element.childNodes[index], isDisabled);
                }
            }
        }

        // to disable the control
        ToggleDisabledPeoplePicker(document.getElementById("<%=txtMRA.ClientID%>"), true);

    The script shown below can be used to set focus back to the PeoplePicker control from a custom JavaScript validation function:

        var found = false;

        function SetFocusToPeoplePicker(element) {
            try {
                if (element.id.lastIndexOf("upLevelDiv") != -1) {
                    element.focus();
                    found = true;
                    return;
                }
            }
            catch (exception) {}

            if ((element.childNodes) && (element.childNodes.length > 0)) {
                for (var index = 0; index < element.childNodes.length; index++) {
                    if (found) {
                        found = false;
                        break;
                    }
                    SetFocusToPeoplePicker(element.childNodes[index]);
                }
            }
        }

    - Jignesh

    Read the article

  • SQLAuthority News – Community Tech Days – TechEd on The Road – Ahmedabad – June 11, 2011

    - by pinaldave
    TechEd on the Road is back – in Ahmedabad on June 11, 2011! Inviting all professional developers, project managers, architects, IT managers, IT administrators and implementers of Ahmedabad to be a part of Tech•Ed on the Road on 11th June, 2011. We have put together the best sessions from Tech•Ed India 2011 for you in your city. The focal point will be technologies like Database and BI, Windows 7 and ASP.NET. REGISTER HERE!

    Venue: Ahmedabad Management Association (AMA), Dr. Vikram Sarabhai Marg, University Area, Ahmedabad, Gujarat 380 015
    Time: 9:30 AM – 5:30 PM

    The biggest attraction of the event is the session HTML5 – Future of the Web by Harish Vaidyanathan. He is an Evangelist Lead at Microsoft and a hands-on developer himself. I strongly urge all of you to attend his session to understand the direction of the web and Microsoft's take on the subject. I (Pinal Dave) will be presenting the session on SQL Server Performance Tuning, and Jacob Sebastian will be presenting on T-SQL Worst Practices. Do not miss this opportunity. Those who have attended in the past know that for the last two years the venue has been jam-packed within the first few minutes, so do come in early to get a better seat and reserve your spot. We will have a QUIZ during the event and various gifts – watches, USB drives, T-shirts and many more interesting gifts. Check the agenda today and register right away. There will be no video recording, so come and visit the event in person.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Best Practices, Database, DBA, MVP, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article
