Search Results

Search found 18390 results on 736 pages for 'boost build'.


  • Material System

    - by Towelie
    I'm designing a material/shader system (the target API is DX10+, maybe OpenGL 3+ later; for now only DX10). I know there have been a lot of topics about this, but I can't find what I need. I don't want to do any script compilation or parsing at run time. So there is some artist-created material, written in some analog of Cg. It is compiled to HLSL code, and then to a final shader. There are also some hard-coded constant buffers, like:

        cbuffer EveryFrameChanging
        {
            float4x4 matView;
            float time;
            float delta;
        }

    Shaders use shared constant buffers to get their parameters. For each mesh in the scene, the system collects what the mesh needs and what it can provide (normals, binormals, etc.) and finds the corresponding shader permutation, or generates the missing parts. Also, during the build it computes the render states and a hash for each shader permutation, which is later used for sorting; it could even assign each shader an ID from 0 to ShaderCount without gaps, for sorting. A FinalShader has only one technique and one pass. After that, each mesh gets some shader assigned and is ready to render. Some pseudocode:

        SetConstantBuffer(ConstantBuffer::PerFrame);
        foreach (shader in FinalShaders)
        {
            SetConstantBuffer(ConstantBuffer::PerShader, shader);
            SetRenderState(shader);
            foreach (mesh in shader.GetAllMeshes())
            {
                SetConstantBuffer(ConstantBuffer::PerMesh, mesh);
                SetBuffers(mesh);
                Draw();
            }
        }

        class FinalShader
        {
        public:
            UUID m_ID;
            RenderState m_RenderState;
            CBufferBindings m_BufferBindings;
        };

    But I have no idea how to create this Cg-like language - and do I really need it?

  • Prevent members of the administrators group from logging in via Remote Desktop

    - by Chris J
    In order to support some build processes on our Server 2003 development servers, we require a common user account that has administrative privs. Unfortunately, this also means that anyone who knows the password can gain admin privs on a server. Assume that trying to keep the password secret is a failed exercise. Developers who need admin privs already have admin privs, so they can log in as themselves. So the question is a simple one: is there anything I can configure to prevent people (ab)using the account to gain administrator on servers they shouldn't have administrator on? I'm aware that devs could disable anything that is put in place, but that's then down to process and auditing to track and manage. I don't mind where or how: it can be via the local security policy, group policy, a batch file executed in the user's profile, or something else.

  • Why not commit unresolved changes?

    - by Explosion Pills
    In a traditional VCS, I can understand why you would not commit unresolved files, because you could break the build. However, I don't understand why you shouldn't commit unresolved files in a DVCS (some of them will actually prevent you from committing the files). Instead, I think that your repository should be locked from pushing and pulling, but not committing. Being able to commit during the merging process has several advantages (as I see it):

      - The actual merge changes are in history.
      - If the merge was very large, you could make periodic commits.
      - If you made a mistake, it would be much easier to roll back (without having to redo the entire merge).
      - The files could remain flagged as unresolved until they were marked as resolved. This would prevent pushing/pulling.
      - You could also potentially have a set of changesets act as the merge instead of just a single one. This would allow you to still use tools such as git rerere.

    So why is committing with unresolved files frowned upon/prevented? Is there any reason other than tradition?
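
    In git terms, the workflow being argued for looks roughly like the sketch below. Git refuses to commit while unmerged paths remain, but staging the conflicted files (conflict markers and all) makes it record the merge commit anyway; the branch and file names here are made up for illustration.

        # Start a merge that conflicts.
        git merge feature-branch        # stops with CONFLICT in some files

        # Stage the files as-is, conflict markers included, and commit.
        # This records the unresolved state in history.
        git add src/parser.c
        git commit -m "WIP merge: conflicts in parser.c not yet resolved"

        # Fix the markers later and commit the real resolution.
        git add src/parser.c
        git commit -m "Resolve merge conflicts in parser.c"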

  • Simple Introduction to using the Enterprise Manager SOA/BPM Facade API by Jaideep Ganguli

    - by JuergenKress
    There may be times when you need to expose just a small section of what is displayed in the Enterprise Manager console for SOA/BPM (the EM console). A simple example is where stakeholders on the systems-integration or customer teams want to monitor a dashboard of statistics on how many instances of a composite have been created and how many have faulted. Some of these stakeholders may not know the EM console and just want a quick view of the statistics without having to navigate EM. This post describes how to use the Oracle Fusion Middleware Infrastructure Management Java API for Oracle SOA Suite (also called the Facade API) to build a custom ADF page displaying this information. If you want a quick introduction to using the Facade API, this post is for you. Read the complete article here. For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

  • Forcing rsync to convert file names to lower case

    - by SvrGuy
    We are using rsync to transfer some (millions of) files from a Windows (NTFS/Cygwin) server to a Linux (RHEL) server. We would like to force all file and directory names on the Linux box to be lower case. Is there a way to make rsync automagically convert all file and directory names to lower case? For example, let's say the source file system had a file named:

        /foo/BAR.gziP

    rsync would then create (on the destination system):

        /foo/bar.gzip

    Obviously, with NTFS being a case-insensitive file system, there cannot be any conflicts. Failing the availability of an rsync option, is there an enhanced build or some other way to achieve this effect? Perhaps a mount option on Cygwin? Perhaps a similar mount option on Linux? It's RHEL, in case that matters.
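
    rsync itself has no lowercasing switch (hence the question), but one workaround is a rename pass over the destination after the transfer. A minimal sketch, assuming a POSIX shell and a destination tree at /data/dest (the path is an assumption):

        #!/bin/sh
        # Rename every file and directory under the destination to lower case.
        # find -depth walks children before their parents, so renaming a
        # directory never invalidates paths that are still queued.
        # Caveat: this sketch breaks on names containing newlines.
        find /data/dest -depth | while IFS= read -r src; do
            dir=$(dirname "$src")
            base=$(basename "$src")
            lower=$(printf '%s' "$base" | tr '[:upper:]' '[:lower:]')
            if [ "$base" != "$lower" ]; then
                mv "$src" "$dir/$lower"
            fi
        done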

  • How to create a bootable system with a squashfs root

    - by cldfzn
    My goal is to be able to take a customized root file system loaded with the software I want. So far I've created a squashed file system using debootstrap and chroot to install the software I want on the system. The problem I am now running into: whenever I boot into the system, the user accounts that were set up in the chroot do not work. On first boot everything works; on second boot I can't log in. That is baffling to me. Does anyone know a reason, or a place to start looking?

    Update: To get a working system with a squashfs file system:

        sudo apt-get install live-boot live-boot-initramfs-tools extlinux
        sudo update-initramfs -u

    Create a squashfs file from a bootstrapped or running Ubuntu file system with whatever packages you want available. https://help.ubuntu.com/community/LiveCDCustomizationFromScratch provides good instructions for creating a debootstrapped system to build on. Format the target drive with ext2/3/4 and enable the bootable flag. Then create the folder layout on the target drive and install extlinux:

        mkdir -p ${TARGET}/boot/extlinux ${TARGET}/live
        extlinux -i ${TARGET}/boot/extlinux
        dd if=/usr/lib/syslinux/mbr.bin of=/dev/sdX    # X is the drive letter
        cp /boot/vmlinuz-$(uname -r) ${TARGET}/boot/vmlinuz
        cp /boot/initrd.img-$(uname -r) ${TARGET}/boot/initrd
        cp filesystem.squashfs ${TARGET}/live

    Create ${TARGET}/boot/extlinux/extlinux.conf with the following contents:

        DEFAULT Live
        LABEL Live
          KERNEL /boot/vmlinuz
          APPEND initrd=/boot/initrd boot=live toram=filesystem.squashfs
        TIMEOUT 10
        PROMPT 0

    Now you should be able to boot from the target drive into your squashed system.
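
    One step the update does not show is producing filesystem.squashfs itself; a hedged sketch, assuming the debootstrapped tree lives in ./chroot (the directory name and the /boot exclusion are assumptions):

        # Pack the chroot tree into a compressed squashfs image.
        # /boot is excluded because the kernel and initrd are copied separately.
        sudo mksquashfs chroot filesystem.squashfs -e boot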

  • Creating a tag-based website without programming?

    - by monodial
    I want to create a tag-based website, and I need a tool that I could use (preferably without programming). It's a site where a user can pick tags on a certain item. All tags will be placed under the group that they are logically linked to (I will do that by hand). On the other end, a visitor can choose a tag and then be redirected to the few items on which that tag was selected the most. Besides this, I need to set up a registration form (for visitors who want to select tags on a desired item). stackoverflow.com may serve as an example of what I want to achieve; functionally it is quite a similar approach. I am not sure if further detail will bring me closer to getting development advice, but nevertheless, following this template, what I would be missing is:

      - the ability to categorize the tags, so that they fit on one page (overall I assume <200 tags)
      - a box where a user could enter a tag, which would be pending until a certain number of users enter that tag
      - the ability to limit the number of 'questions' that appear when a visitor chooses a tag - 'question' stands for an item to which users are assigning tags (displayed items would depend on how frequently the tag was assigned - say, the top two items)

    Which software should I try, and how should I go about it? Thank you. Lukas P.S. I have bought a hosting account through GoDaddy.com. This is the first website that I am trying to build.

  • JavaScript is not loading

    - by Oden
    Hey, I've got a problem with JavaScript under Ubuntu that drives me crazy. I'm using Gedit for my web sites since I'm an Ubuntu user. When I start a new website I create the folder structure (usually with the GNOME terminal) and copy the files I need into it. The next step is creating an index.html where I build the design and basic JavaScript functionality. JavaScript is stored in a sub-folder of the project, and when I try to load a file using the script tag in the header, my whole page body disappears. If the source contains a script tag with its own body, and it's not the first one, its code won't run. I've tried to solve the problem by setting chmod to 777 with

        sudo chmod -R 777 .

    but nothing changed. CSS loads correctly, but JS doesn't. I'm using the newest version of Apache, with no mod_rewrite stuff, but I get the same problem when I run the HTML from a file (file:///...). Does anyone know how to solve this problem?

  • Why can't I compile this version of Postfix?

    - by Coofucoo
    I just installed Postfix 2.7.11 on Ubuntu Server from source code. I don't use Ubuntu's own package because I need this older version. I found a very interesting problem. Before, on both CentOS 5 and 6, I could build the source code without any problem. But Ubuntu Server 12.04 is totally different. I got the following errors:

        dict_nis.c:173: error: undefined reference to 'yp_match'
        dict_nis.c:187: error: undefined reference to 'yp_match'
        dns_lookup.c:347: error: undefined reference to '__dn_expand'
        dns_lookup.c:218: error: undefined reference to '__res_search'
        dns_lookup.c:287: error: undefined reference to '__dn_expand'
        dns_lookup.c:498: error: undefined reference to '__dn_expand'
        dns_lookup.c:383: error: undefined reference to '__dn_expand'

    The reason is obvious: I just searched for the related libraries and added them to the makefile, and it works. The question is why? What is the difference between Ubuntu Server and CentOS? One possibility is the gcc and ld versions: Ubuntu Server uses different versions of gcc and ld than CentOS. But I am not sure.
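
    For reference: yp_match lives in libnsl and __dn_expand/__res_search in libresolv, and Ubuntu's toolchain passes --as-needed to the linker by default, which makes missing or misordered libraries fail where older toolchains were forgiving. Postfix takes extra link libraries through AUXLIBS when regenerating its makefiles; a hedged sketch, not necessarily the poster's exact fix:

        # Regenerate the Postfix makefiles with the extra libraries, then build.
        make makefiles AUXLIBS="-lnsl -lresolv"
        make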

  • Nvidia GeForce GT-520M-CN on Intel DH61WW, Ubuntu 12.04

    - by j goseeped
    Hi people, I hope you can help a little bit; I appreciate your time. I have this desktop: i7 2600, 8 GB DDR3 RAM, Intel DH61WW board, Nvidia GeForce GT 520-CN 2 GB DDR3. I just installed Ubuntu 12.04 64-bit, kernel 3.2.0-23-generic. I want to set up two Samsung 22" LED monitors and get my video card working.

    1) I downloaded and installed Nvidia driver 295.59, and also tried 302.17:

        apt-get update && apt-get upgrade
        apt-get install build-essential linux-headers-$(uname -r)
        apt-get remove --purge nvidia*
        apt-get remove --purge xserver-xorg-video-nouveau

    Then in /etc/modprobe.d/blacklist.conf:

        blacklist vga16fb
        blacklist nouveau
        blacklist rivafb
        blacklist nvidiafb
        blacklist rivatv

    And finally:

        sh NVIDIA.run
        sudo service lightdm start
        reboot
        nvidia-xconfig

    2) After reboot I get 800x600, and nvidia-settings says this: "You do not appear to be using the NVIDIA X driver. Please edit your X configuration file (just run nvidia-xconfig as root), and restart the X server."

    3) I changed xorg.conf a little bit to set up the resolution properly.

    4) I don't get any image on the monitor, and I don't have any options in Nvidia X Server Settings.

        lspci | grep VGA
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        01:00.0 VGA compatible controller: NVIDIA Corporation GF119 [GeForce GT 520] (rev a1)

        egrep -i 'glx|nvidia' /var/log/Xorg.0.log
        [ 12.005] (II) LoadModule: "glx"
        [ 12.005] (II) Loading /usr/lib/xorg/modules/extensions/libglx.so
        [ 12.575] (II) Module glx: vendor="NVIDIA Corporation"
        [ 12.585] (II) NVIDIA GLX Module 302.17 Tue Jun 12 16:22:45 PDT 2012
        [ 12.585] (II) Loading extension GLX
        [ 13.037] (EE) Failed to initialize GLX extension (Compatible NVIDIA X driver not found)
        [ 13.044] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=3 (/dev/input/event10)
        [ 13.044] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=7 (/dev/input/event9)

        glxinfo | grep direct
        Xlib: extension "GLX" missing on display ":0.0".
        Error: couldn't find RGB GLX visual or fbconfig

    Sorry, my English is not very good. Thanks, guys.

  • When using RAID10 + BBWC why is it better to separate PostgreSQL data files from OS and transaction logs than to keep them all on the same array?

    - by Vlad
    I've seen the advice everywhere (including here and here): keep your OS partition, DB data files and DB transaction logs on separate discs/arrays. The general recommendation is to use RAID1 for OS, RAID10 for data (or RAID5 if load is very read-biased) and RAID1 for transaction logs. However, considering that you will need at least 6 or 8 drives to build this setup, wouldn't a RAID10 over 6-8 drives with BBWC perform better? What if the drives are SSDs? I'm talking here about internal server drives, not SAN.

  • Custom PC won't boot Windows 7 DVD but does boot Windows Vista

    - by M_rk
    I ordered a custom-built PC (it was assembled by the store). This is the setup:

        Motherboard: ASRock A75 PRO4-M
        DVD drive: LG GH24NS90
        SSD: Samsung 830 series 128 GB
        DDR3 SDRAM: 2x Corsair XMS3 CMX4GX3M1A1600C9 (2x 4 GB)
        APU (CPU+GPU): AMD A8-3850 Boxed

    I got an installation DVD for Windows 7 Professional x64 English (including SP1), but it doesn't work. I got a new one from the store and it doesn't work either. However, both work on another PC, so the DVDs aren't bad. I tried an old installation DVD for Windows Vista; both the 32-bit and 64-bit versions work. So the boot order and such are right and working on the new PC. Is there something I'm missing here? Any ideas on how to make it work?

  • Using pkexec policy to run out of /opt/

    - by liberavia
    I am still trying to make it possible to run my app with root privileges. Therefore I created two policies to run the application via pkexec (one for /usr/bin and one for /opt/extras...) and added them to setup.py:

        data_files=[('/usr/share/polkit-1/actions',
                     ['data/com.ubuntu.pkexec.armorforge.policy']),
                    ('/usr/share/polkit-1/actions',
                     ['data/com.ubuntu.extras.pkexec.armorforge.policy']),
                    ('/usr/bin/', ['data/armorforge-pkexec'])]
        )

    Additionally I added a start script which uses pkexec to start the application. It distinguishes between the two places and is used in the Exec statement of the desktop file:

        #!/bin/sh
        if [ -f /opt/extras.ubuntu.com/armorforge/bin/armorforge ]; then
            pkexec "/opt/extras.ubuntu.com/armorforge/bin/armorforge" "$@"
        else
            pkexec `which armorforge` "$@"
        fi

    If I simply do a quickly package, everything works right. But if I package with the extras option (quickly package --extras), the Exec statement will be exchanged. Even if I try to simulate the pkexec call via armorforge-pkexec, it will ask for a password and then return this:

        andre@andre-desktop:~/Entwicklung/Ubuntu/armorforge$ armorforge-pkexec
        (armorforge:10108): GLib-GIO-ERROR **: Settings schema 'org.gnome.desktop.interface' is not installed
        Trace/breakpoint trap (core dumped)

    So OK, I could not trick the opt thing. How can I make sure that my application will run with root privileges out of /opt? I copied the way of using pkexec from Synaptic. My application is for communicating with AppArmor, which currently has no D-Bus interface, so I need to write into the /etc/apparmor.d folder. How should I deal with the opt build which, as far as I understand, is required to submit my application to the Ubuntu Software Center? Thanks for any hints and/or links :-)
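
    A hedged way to check that the installed policy files were actually registered with polkit (the action ID is taken from the post; output format varies by polkit version):

        # List all registered polkit actions and look for ours.
        pkaction | grep armorforge

        # Dump the full definition of one action, including its annotations.
        pkaction --action-id com.ubuntu.pkexec.armorforge --verbose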

  • Did C++11 address concerns with passing std lib objects across dynamic/shared library boundaries (i.e. DLLs and SOs)?

    - by Doug T.
    One of my major complaints about C++ is how hard it is in practice to pass std library objects across dynamic library (i.e. dll/so) boundaries. The std library is often header-only, which is great for doing some awesome optimizations. However, DLLs are often built with different compiler settings that may impact the internal structure and code of std library containers. For example, in MSVC one DLL may build with iterator debugging on while another builds with it off; these two DLLs may run into issues passing std containers around. If I expose std::string in my interface, I can't guarantee that the code the client is using for std::string is an exact match of my library's std::string. This leads to hard-to-debug problems, headaches, etc. You either rigidly control the compiler settings in your organization to prevent these issues, or you use a simpler C interface that won't have these problems, or you specify to your clients the expected compiler settings they should use (which sucks if another library specifies other compiler settings). My question is whether or not C++11 tried to do anything to solve these issues.

  • Best way: restructure an existing Team Foundation Server (TFS) solution

    - by dhh
    In my department we are developing several smaller add-ons for a unified communication server. For versioning and distributed development we use a Team Foundation Server 2012. But: there is only one large TFS solution for all of our applications and libraries:

        Main Solution
            Applications
                App 1
                App 2
                App 3
            Externals
            Libraries
                Lib 1
                Lib 2
            Tools

    The Applications path contains all main applications. Those do not depend on each other, but they depend on the Libraries and Externals projects. The Externals path contains some external DLLs referenced in our applications and libraries. The Libraries path contains commonly used libs (UI templates, helper classes, etc.). They do not depend on each other, and they are referenced in the Libraries and the Tools projects. The Tools path contains some helper programs like setup helpers, update web services, etc. Now, there are some major reasons why I'd like to change this structure:

      - We can't use server builds.
      - It's uncomfortable to manage TFS scrum management (sprints, impediments, etc.) with a solution structure like that.
      - Every developer always has access to all projects in the solution.
      - A complete build lasts too long if one accidentally hits [F6] in Visual Studio...

    What would you change in this solution? How would you break those projects into smaller solutions, and how should those solutions be structured? My first approach would be to create one TFS project for each application, library, and tool. But how can I ensure that e.g. App 2 always contains the newest version of Lib 1? Do I have to monitor changes on Lib 1 and update App 2 manually as soon as the lib changes? Or can I somehow force Visual Studio to always use the newest version of an external project?

  • Is this the correct approach to an OOP design structure in php?

    - by Silver89
    I'm converting a procedural site to an OOP design to allow more easily manageable code in the future, and so far have created the following structure:

        /classes
        /templates
        index.php

    With these classes:

        ConnectDB
        Games
        System
        User
          - Moderator (extends User)
          - Administrator (extends User)

    In the index.php file I have code that detects whether any $_GET values are posted, to determine which page content to build (it's early, so there's only one example and no default):

        function __autoload($className) {
            require "classes/".strtolower($className).".class.php";
        }

        $db = new Connect;
        $db->connect();

        $user = new User();

        if (isset($_GET['gameId'])) {
            System::buildGame($gameId);
        }

    This then runs the buildGame function in the System class, which looks like the following, and then uses getters in the Game class to return values, such as $game->getTitle() in the template file templates/play.php:

        function buildGame($gameId) {
            $game = new Game($gameId);
            $game->setRatio(900, 600);
            require 'templates/play.php';
        }

    I also have .htaccess rules so that the actual game page URL works instead of passing the parameters to index.php. Are there any major errors in how I'm setting this up, or do I have the general idea of OOP correct?

  • How to set up a virtual machine in Ubuntu desktop to run a Debian server

    - by stickman
    I want to run a virtual machine on my Ubuntu desktop that runs a Debian server. The purpose of this is to generate Debian packages. I have some C++ applications that were originally developed on my Ubuntu machine, and I need to (re)compile them on a Debian server in order to:

      - build .deb packages for deployment on a Debian server
      - make sure that the applications will definitely work on a Debian server

    The idea is that I can do 90% of my development on Ubuntu (where I am more comfortable) and deploy a binary package that definitely works on Debian. BTW, I am developing on Karmic Koala (Ubuntu 9.10).
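
    As an aside, a chroot-based package builder can cover the same two goals without a full virtual machine; a hedged sketch using pbuilder, where the Debian release name and mirror are assumptions for the Karmic era:

        # Install pbuilder and create a minimal Debian build chroot.
        sudo apt-get install pbuilder debootstrap
        sudo pbuilder create --distribution lenny --mirror http://ftp.debian.org/debian

        # Build a Debian source package inside the clean chroot.
        sudo pbuilder build mypackage_1.0-1.dsc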

  • If I send an IPA over TestFlight, can it be used to deploy to the app store?

    - by Reid Belton
    I am currently working for a small startup. I was previously under contract, now I am working for equity (no pay). The thing is, there is not yet a signed agreement in place as the details are being worked out. I may finish development before the contract is ready. I'm not currently under any contract or agreement, so the other party doesn't have any legal claim (that I know of) to the code I'm writing now, other than NDA (which just precludes me from cutting him out and releasing on my own). He already has the old code that I wrote under contract. I've made it clear to the other party that I won't submit the app or turn over the code until there's something signed to protect my interests. I've stopped pushing commits to the company repo (I'm now the only developer actively working on the project). However, I would still like to send builds over TestFlight for feedback and testing purposes. The other party has access to the developer portal and iTunes Connect for code signing, etc. Things are amicable and I don't foresee getting burnt on this, but I'm not going to put myself in that position. My concern is that if I send a finished build via TestFlight, it could be extracted and submitted to the app store without my participation. They wouldn't have the source for future maintenance and updates, of course, but it could be reverse-engineered by another developer later working from the old code base. Is this technically feasible at all? If so, is there a way I can send builds for testing while protecting my interests?

  • What is the value of checking in failing unit tests?

    - by Adam W.
    While there are ways of keeping unit tests from being executed, what is the value of checking in failing unit tests? I will use a simple example: case sensitivity. The current code is case-sensitive. A valid input into the method is "Cat", and it returns the enum value Animal.Cat. However, the desired functionality of the method should not be case-sensitive. So if the method described was passed "cat", it could possibly return something like Animal.Null instead of Animal.Cat, and the unit test would fail. Though a simple code change would fix this case, a more complex issue may take weeks to fix, while identifying the bug with a unit test could be a less complex task. The application currently being analyzed has 4 years of code that "works". However, recent discussions regarding unit tests have found flaws in the code. Some just need explicit implementation documentation (e.g. case-sensitive or not); in other places the code does not trigger the bug based on how it is currently called. But unit tests can be created that execute specific scenarios with valid inputs and cause the bug to appear. What is the value of checking in unit tests that exercise the bug until someone can get around to fixing the code? Should such a unit test be flagged with ignore, priority, category, etc., to determine whether a build was successful based on the tests executed? Eventually the unit test should be made to pass once someone fixes the code. On one hand, checking in failing tests shows that identified bugs have not been fixed. On the other, there could be hundreds of failed unit tests showing up in the logs, and weeding through the ones that are expected to fail vs. failures caused by a code check-in would be difficult.

  • Problems loading Hilva tutorials

    - by Beska
    I'm a newcomer to XNA, and I'm evaluating some libraries. The Hilva Graphics Engine looks interesting, and I'm trying to run its tutorials. However, all of them give me errors. For example, if I download the ParallaxMappingSample demo and try to build it, I get:

        Error 1  Error loading pipeline assembly "C:\Users\Me\Desktop\ParallaxMappingSample\Hilva.Content.dll".  ParallaxMappingSample

    I get similar errors for all of the samples. Unfortunately, this error isn't very enlightening. I can see Hilva.Content.dll in the appropriate directory. I tried removing and re-adding the reference from the content project, but I get the same error. I'm not sure it's relevant, but I'm on Windows 7, using Microsoft Visual Studio 2010 and XNA 4.0. Is there an easy (or difficult) solution? EDIT: If you happen to try this, even if you don't have a solution, let me know in a comment. Whether it works for you or you get the same problem, either result would tell me whether it's just a problem with the tutorial or something on my end.

  • How do I make a Minecraft kiosk for a portable USB drive that boots on most computers?

    - by user2044589
    Some time ago, someone referred me to a cool website called Rapid Rollout. It worked fine until I tried to install an OS onto a netbook. In short, it didn't work as well as I expected, and it also couldn't install to USB flash drives. I'm trying to build a system (or use a service that would create a system) that would open the Minecraft launcher (a jar) and show it full-screen with no background. It would also all have to fit into 8 GB (the most I can use right now). How can I accomplish this?

  • RS-232 vs. RS-485

    - by user60524
    I'm doing a little research on the two to figure out which one may better suit my purposes (communications among different hardware). How do they fare against one another? I'm far from being a specialist and have no idea where I would even start looking for data to compare and contrast. If possible, can someone please answer the following questions with regard to each of these:

      - Can they be networked amongst each other?
      - Can they be easily networked over Ethernet?
      - What speeds do they transfer at? (min, max, etc.)
      - Reliability?
      - Best framework to build on top of to support the above?
      - Any standard communications programs?
      - Debugging capability?

    Any help would be very much appreciated, thanks.

  • Artists and music - I need help deciding which CMS to use

    - by infty
    A friend has asked me to build a site with the following options:

      - staff members must be able to add new music and artists to the page
      - a gallery must be provided; it would also be good if each artist could have his/her own smaller gallery
      - users must be able to vote for artists
      - users must be able to take part in discussions (forums or comment sections)
      - staff members must be able to blog
      - staff members must be able to write articles

    I did a small project where I actually implemented all of these features, but I want to use an existing content management system for them, so that future developers can, hopefully, extend the website more easily - and also so that I don't have to provide too much documentation. I have never developed a website using an external CMS like Drupal or WordPress, and after seeing hours of tutorial videos on both systems, I still can't make up my mind whether I should:

      a) use Drupal 7
      b) use WordPress 3
      c) create my own CMS

    I can only imagine that staff members would also want to create content using iPhone- or Android-based mobile devices, but this is not a required feature. Can someone with experience please tell me about their experiences with bigger projects like this? The site will have approx. 400,000 - 500,000 visitors in total (not daily visitors; based on numbers from last year over a period of 4 months).

  • Remote execution in Workgroup network

    - by ayyob khademi
    Consider this scenario (please don't say that it would be better if I created a domain network; just consider the scenario): 10 PCs are all interconnected via a switch in a workgroup network named WORKGROUP. PC specs (all are the same): Windows XP SP2 EN (build: 2600.xpsp_sp2_rtm.040803-2158). I have full physical control over my own PC (one of those 10 PCs), and what I know about the other ones is:

      - the IPs of all 10 PCs
      - the administrator account name of all 10 PCs
      - the administrator account password of all 10 PCs

    How can I execute an application on the other PCs (without touching them)? How can I modify their registry settings (without touching them)?

  • if exist !SOMEPATH! not working in batch file

    - by akash
    I have a batch script in which I am using multiple if exist statements. The problem is that all the statements work except one. The following variables are set:

        SETLOCAL ENABLEDELAYEDEXPANSION
        SET basedrive=E:
        SET tfworkspace=!basedrive!\TFS
        SET envdefault=%1
        SET projenv=!envdefault!
        echo subapp=!subapp! subappservice=!subappservice!
        SET tfworkspacepath=!tfworkspace!\!releasebranch!\!app!\!subapp!
        SET tfworkspacepathservice=!tfworkspace!\!releasebranch!\!app!\!subapp!\sourcecode\build\!projenv!

    This statement works:

        if exist "!tfworkspacepath!" (robocopy "!tfworkspacepath!"\sourcecode\messagebroker\ /E /NFL /NJS /NDL /ETA "!basedir!\!messagebroker!" ) else SET /a foldererror=1

    This statement doesn't work - by "doesn't work" I mean that even though the path does not exist, it still tries to robocopy:

        if exist !tfworkspacepathservice! ( robocopy !tfworkspacepathservice! /E /NFL /NJS /NDL /ETA "!basedir!\!scripts!") else SET /a foldererror =!foldererror!+1

    I am new to batch writing; please guide me.
