Search Results

Search found 23404 results on 937 pages for 'script compression'.


  • Portable scripting language for a multi-server admin?

    - by Aaron
    Please Note: Portable as in portableapps.com, not the traditional definition. Originally posted on stackoverflow.com, asking here at another user's suggestion.
    I'm a DBA and sysadmin, mostly for Windows machines running SQL Server. I'm looking for a programming/scripting language for Windows that doesn't require Admin access or an installer, needing no install process other than expanding it into a folder. My intent is to have a language for automation on which I can standardize. Up to this point, I've been using a combination of batch files and Unix shell, using sh.exe from UnxUtils, but it's far from a perfect solution. I've evaluated a handful of options; all of them have at least one serious shortcoming or another. I have a strong preference for something open source or dual licensed, but I'm more interested in finding the right tool than anything else. Not interested in anything that relies on Cygwin or Java, but at this point I'd be fine with something that needs .NET.
    Requirements:
    - Manageable footprint (1-100 files, under 30 MB installed)
    - Runs on Windows XP and Server (2003+)
    - No installer (exe, msi)
    - Works with external pipes, processes, and files
    - Support for MS SQL Server or ODBC connections
    Bonus points:
    - Open source
    - FFI for calling functions in native DLLs
    - GUI support (native or gtk, wx, fltk, etc.)
    - Linux, AIX, and/or OS X support
    - Dynamic, object oriented and/or functional, interpreted or bytecode compiled; interactive development
    - Able to package or compile scripts into executables
    So far I've tried:
    - Ruby: 148 MB on disk, 23000 files
    - Portable Python: 54 MB on disk, 2800 files
    - Strawberry Perl: 123 MB on disk, 3600 files
    - REBOL: great, except closed source and no MSSQL or ODBC in the free version
    - Squeak Smalltalk: great, except poor support for scripting
    ---- cut: points of clarification ----
    Why all the limitations? I realize some of my criteria seem arbitrarily confining. It's primarily a product of my environment. I work as a SQL Server DBA and backup Unix admin at a division of a large company. In addition to nearly a hundred boxes running some version or another of SQL Server on Windows, I also support the SQL Server Express Edition installs on over a thousand machines in the field. Because of our security policies, I don't have login rights on every machine. Often enough, an issue comes up and I'm given local Admin for some period of time. Often enough, it's some box I've never touched and where I don't have my own environment set up yet. I may have temporary admin rights on the box, but I'm not the admin for the machine - I'm just the DBA. I've no interest in stepping on the toes of the Windows admins, nor do I want to take over any of their duties. If I bring up "installing" something, it suddenly becomes a matter of interest for Production Control and the Windows admins; if I'm copying up a script, no one minds. The distinction may not mean much to the readers, but if someone gets the wrong idea I've suddenly got a long wait and significant overhead before I can get the tool installed and the problem solved. That's why I want something that can be copied and run in the manner of a portable app.
    What about the small footprint? My company has three divisions, each in a different geographical location, and one of them is a new acquisition. We have different production control/security policies in each division. I support our MSSQL databases in all three divisions. The field machines are spread around the US, sometimes connecting to the VPN over very slow links. Installing Ruby using psexec has taken a long time over these connections. In these instances, the bigger time waster seems to be archives with thousands and thousands of files rather than their sheer size. You could say I'm spoiled by Unix, where the admins usually have at least some modern scripting language installed; I'd use PowerShell, but I don't know it well and, more importantly, it isn't everywhere I need to work. It's a regular occurrence that I need to write, deploy and execute some script on short notice on some machine I've never logged in to. Since having Ruby or something similar installed on every machine I'll ever need to touch is effectively impossible because of the approvals, time and Windows admin labor needed, it makes more sense to find a solution that allows me to work on my own terms.

    Read the article

  • Can't make updates with LDAP from Linux box to Windows AD

    - by amburnside
    I have a webapp (built using Zend Framework - PHP) that runs on a Linux environment and needs to authenticate against Active Directory on a Windows server. So far my webapp can authenticate with LDAPS, but cannot perform any kind of write operation (add/update/delete). It can only read. I have configured my server as follows: I have exported the CA certificate from my Windows AD server to /etc/openldap/certs; I have created a pem file based on this certificate using openssl; I have updated /etc/openldap/ldap.conf so that it knows where to look for the pem certificate: TLS_CACERT /etc/openldap/certs/xyz.internal.pem. When I run my script, I get the following error: 0x35 (Server is unwilling to perform; 0000209A: SvcErr: DSID-031A1021, problem 5003 (WILL_NOT_PERFORM), data 0 ). Have I missed something with my configuration which is causing the server to reject updates to AD?
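    The PHP side aside, it can help to confirm that the directory accepts writes at all for this bind account over LDAPS. Below is a minimal Python sketch using the ldap3 library (host, bind account and target DN are placeholders); if the same WILL_NOT_PERFORM comes back here, the problem lies on the AD side (bind account permissions, or AD insisting on a signed/secure connection for that operation) rather than in the Zend code.

        import ssl
        from ldap3 import Server, Connection, Tls, MODIFY_REPLACE

        # Hypothetical host, bind account and target DN -- substitute your own.
        tls = Tls(ca_certs_file='/etc/openldap/certs/xyz.internal.pem',
                  validate=ssl.CERT_REQUIRED)
        server = Server('dc01.xyz.internal', port=636, use_ssl=True, tls=tls)

        # Bind as an account that has write permission on the target objects.
        conn = Connection(server, user='[email protected]',
                          password='secret', auto_bind=True)

        # A simple attribute update; a WILL_NOT_PERFORM on a change like this
        # usually points at the bind account's rights rather than TLS setup.
        conn.modify('CN=Jane Doe,OU=Staff,DC=xyz,DC=internal',
                    {'telephoneNumber': [(MODIFY_REPLACE, ['+44 20 7946 0000'])]})
        print(conn.result)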

    Read the article

  • How to restart RoR services after server has been rebooted

    - by Alan DeLonga
    Update: I have been searching around to see what services would possibly need to be restarted in my project after reboot. One of them was Thinking Sphinx, which I finally got to the point where it logs: [Fri Nov 16 19:34:29.820 2012] [29623] accepting connections. But I still can't run searchd or searchd --stop, because there was no generated sphinx.conf file in /etc/sphinxsearch (for more info refer to this open thread on thinking_sphinx after reboot). I then turned to looking into restarting unicorn or thin, based on some insight I got. The issue is, when I check my gems I see one for thin AND unicorn. But when I try to start either one of them, they have no file residing in /etc/init.d/, where the nginx and sphinxsearch files reside... Would rebooting totally erase the files for an app server like thin or unicorn? We are hosted on Rackspace running ruby 1.9.2p290, rails (3.2.8, 3.2.7, 3.2.0), nginx/1.1.19. Notice that there are gems for unicorn and thin, but there is no unicorn.rb or thin.rb in my config folder for my app... I am still super lost; if anyone can give me some insight on some steps to take to figure this out I would really appreciate it. Anything would help, thanks for reading. thin 1.4.1, unicorn 4.3.1. When I run unicorn I get the same issue as referenced here:
    > /usr/local/bin/unicorn start
    /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/configurator.rb:610:in `parse_rackup_file': rackup file (start) not readable (ArgumentError)
    from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/configurator.rb:76:in `reload'
    from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/configurator.rb:67:in `initialize'
    from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/http_server.rb:104:in `new'
    from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/http_server.rb:104:in `initialize'
    from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/bin/unicorn:121:in `new'
    from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/bin/unicorn:121:in `<top (required)>'
    from /usr/local/bin/unicorn:19:in `load'
    from /usr/local/bin/unicorn:19:in `<main>'
    When I run thin it just opens a command line prompt...
    /usr/local/bin/thin start
    >> Using rack adapter
    Other gems: * LOCAL GEMS * actionmailer (3.2.8, 3.2.7, 3.2.0) actionpack (3.2.8, 3.2.7, 3.2.0) activemodel (3.2.8, 3.2.7, 3.2.0) activerecord (3.2.8, 3.2.7, 3.2.0) activeresource (3.2.8, 3.2.7, 3.2.0) activesupport (3.2.8, 3.2.7, 3.2.0) arel (3.0.2) builder (3.0.0) bundler (1.1.5) carmen (1.0.0.beta2) carmen-rails (1.0.0.beta3) cocaine (0.2.1) coffee-rails (3.2.2) coffee-script (2.2.0) coffee-script-source (1.3.3) daemons (1.1.9) erubis (2.7.0) eventmachine (0.12.10) execjs (1.4.0) faraday (0.8.4) faraday_middleware (0.8.8) foursquare2 (1.8.2) geokit (1.6.5) hashie (1.2.0) hike (1.2.1) httparty (0.8.3) httpauth (0.1) i18n (0.6.0) journey (1.0.4) jquery-rails (2.0.2) json (1.7.4, 1.7.3) jwt (0.1.5) kgio (2.7.4) lastfm (1.8.0) libv8 (3.3.10.4 x86_64-linux) mail (2.4.4) mime-types (1.19, 1.18) minitest (1.6.0) multi_json (1.3.6) multi_xml (0.5.1) multipart-post (1.1.5) mysql2 (0.3.11) oauth2 (0.8.0) paperclip (3.1.1) polyglot (0.3.3) rack (1.4.1) rack-cache (1.2) rack-ssl (1.3.2) rack-test (0.6.1) rails (3.2.8, 3.2.7, 3.2.0) railties (3.2.8, 3.2.7, 3.2.0) raindrops (0.10.0, 0.9.0) rake (0.9.2.2, 0.8.7) rdoc (3.12, 2.5.8) riddle (1.5.3) sass (3.2.0, 3.1.19) sass-rails (3.2.5) sprockets (2.1.3) sqlite3 (1.3.6) sqlite3-ruby (1.3.3) therubyracer (0.10.2, 0.10.1) thin (1.4.1) thinking-sphinx (2.0.10) thor (0.16.0, 0.15.4, 0.14.6) tilt (1.3.3) treetop (1.4.10) tzinfo (0.3.33) uglifier (1.2.7, 1.2.4) unicorn (4.3.1) xml-simple (1.1.1)
    I am working on a project that was built by another group. I made some modifications to a constants file in the config folder (changing some values for arrays that populated some drop-down fields), but the app had to be rebooted before those changes would be recognized. The hosting is through Rackspace; we rebooted through the option on their site. I contacted them and checked the status of our server; the port is open and operational. The problem is the app is not running when you go to the address for the site. Then when I put in the IP address of the server it just says "Welcome to Nginx". But in the log files I see: [Thu Nov 15 02:34:37.945 2012] [15916] caught SIGTERM, shutting down [Thu Nov 15 02:34:37.996 2012] [15916] shutdown complete. I am not very versed in server-side setup. I have also never worked on a Rails project that had to have specific services started before the application will start. Any insight as to how to figure out what services need to be restarted and how to go about restarting them would be greatly appreciated. I feel kind of dead in the water at this point... Thanks, Alan

    Read the article

  • Enigmail - how to encrypt only part of the message?

    - by Lukasz Zaroda
    When I confirmed my OpenPGP key on Launchpad I got a mail from them that was only partially encrypted with my key (only a few paragraphs inside the message). Is it possible to encrypt only a chosen part of the message with Enigmail? Or what would be the easiest way to accomplish it? Added #1: I found a pretty convenient way of producing ASCII-armoured encrypted messages by using the Nautilus interface (useful for those who for some reason don't like to work with the terminal). You need to install the Nautilus-Actions Configuration Tool and add there a script with a name e.g. "Encrypt in ASCII" and parameters: path: gpg, parameters: --batch -sear %x %f. The trick is that now you can create a file whose extension is the name of your recipient, fill it with your message, right-click it in Nautilus, choose "Encrypt in ASCII", and you will have an encrypted ASCII file whose content you can (probably) just copy into your message. But if anybody knows a more convenient solution please share it. Added #1B: In the above case, if you care more about the security of your messages, it's worth turning off the invisible backup files that gedit creates every time you create a new document, or just remembering to delete them.
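    The same gpg flags can be wrapped in a few lines of script instead of the file-extension trick, which keeps the recipient explicit and avoids the temporary file. A minimal Python sketch around the gpg command line (the recipient address is a placeholder); the returned armoured block can be pasted into the middle of an otherwise plain-text message.

        import subprocess

        def encrypt_armored(text, recipient):
            """Sign and encrypt `text` for `recipient`, returning ASCII-armoured output.

            Same flags as the Nautilus action above (-s sign, -e encrypt, -a armour,
            -r recipient); gpg prompts for the signing passphrase via its agent.
            """
            result = subprocess.run(
                ['gpg', '--armor', '--sign', '--encrypt', '-r', recipient],
                input=text.encode(), stdout=subprocess.PIPE, check=True)
            return result.stdout.decode()

        # Paste the returned block into the part of the mail that should be secret.
        print(encrypt_armored('only this paragraph is secret', '[email protected]'))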

    Read the article

  • What might cause a print error in Perl?

    - by Mark J Seger
    I have a long-running script that every hour opens a file, prints to it and closes the file. I've recently found that, very rarely, the print is failing - not because I'm testing the status of the print itself, but because of missing entries in the file, which persist until the system is actually rebooted! I do trap file open failures and write a message to syslog when that happens, and I'm not seeing any open failures, so I'm now guessing it may be the print that is failing. I'm not trapping the print failures, which I suspect most people don't, but I am now going to update that one print. Meanwhile, my question is: does anyone know what types of situations could cause a print statement to fail when there is plenty of disk storage and no contention for a file which has been successfully opened in append mode?
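    The question is about Perl, but the underlying behaviour is language-independent: prints normally land in a buffer first, so errors such as a full disk or quota (ENOSPC), an I/O error (EIO) or a stale NFS handle often only surface at flush or close time, long after open() succeeded. Below is a small Python illustration of checking the write all the way to disk (the log path is made up); in Perl the equivalent is checking the return value of print and, especially, of close.

        import os

        def append_line(path, line):
            """Append one line and force it to disk, surfacing errors that a bare
            print/write would leave sitting in a buffer until flush or close."""
            try:
                with open(path, 'a') as f:
                    f.write(line + '\n')
                    f.flush()                 # push the stdio buffer to the OS
                    os.fsync(f.fileno())      # push the OS cache to the device
            except OSError as e:
                # Typical culprits: ENOSPC (disk or quota full), EIO (device error),
                # ESTALE (stale NFS handle) -- none of which appear at open() time.
                print('append failed: %s' % e)

        append_line('/var/log/myjob/hourly.log', 'heartbeat')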

    Read the article

  • php.ini use multiple include paths - open_basedir restriction

    - by hfidgen
    I need to allow an include path for a vhost subdomain on Plesk 10. I've edited the PHP PEAR path into /etc/php.ini, as I'm happy for it to be globally available: include_path = ".:/usr/share/pear/". This works insofar as PHP is able to see the files in that directory when a script tries to include them, but I'm getting the dreaded open_basedir error: Warning: require_once() [function.require-once]: open_basedir restriction in effect. File(/usr/share/pear/xxxx.php) is not within the allowed path(s): (/var/www/vhosts/xxxx.com/subdomains/test/httpdocs/:/tmp/). Am I right in saying that the subdomain or main domain can have a vhost.conf file in which I can alter the open_basedir allowed paths? I've tried searching out solutions but I'm afraid I can't quite see one yet :)
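    Yes - on Plesk, a per-domain (or per-subdomain) vhost.conf is the usual place for this. Below is a sketch of what it might contain, assuming PHP runs as an Apache module so php_admin_value applies (FastCGI setups configure open_basedir differently); the directory layout varies between Plesk versions, and Plesk's web server configuration needs to be regenerated and Apache reloaded after adding it.

        # /var/www/vhosts/xxxx.com/subdomains/test/conf/vhost.conf  (path layout is
        # an assumption -- adjust to your Plesk version)
        <Directory /var/www/vhosts/xxxx.com/subdomains/test/httpdocs>
            php_admin_value open_basedir "/var/www/vhosts/xxxx.com/subdomains/test/httpdocs:/tmp:/usr/share/pear"
        </Directory>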

    Read the article

  • Remote Yum mirror

    - by specto
    I have a bunch of remote computers that must be updated to the most recent packages for RedHat 4 and RedHat 5. I am using mrepo to mirror the RHN packages; however, the remote computers do not have an internet connection. Because of this I have to update the mirror server that sits with the remote computers using a DVD. This is to cut shipping costs down to just a DVD. I am attempting to script this so I can fit all of the new packages on a CD or a DVD. I send updates about once or twice a month depending on package requirements. So my question is: is there a good method to do this so that the only things transferred are the new packages? I wish I could just use rsync. Thanks.
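    One simple way to ship only what changed is to keep a marker from the last burn and stage anything newer onto the disc, preserving the repository layout so the disc contents can be copied straight over the remote mirror (which then regenerates its metadata). A Python 3 sketch; the mrepo root and staging paths are assumptions.

        import os, shutil

        MIRROR = '/var/mrepo'                 # assumed mrepo mirror root
        STAGING = '/tmp/dvd-staging'          # gets burned to CD/DVD
        STAMP = os.path.join(MIRROR, '.last-shipped')

        last = os.path.getmtime(STAMP) if os.path.exists(STAMP) else 0
        os.makedirs(STAGING, exist_ok=True)

        for root, dirs, files in os.walk(MIRROR):
            for name in files:
                if not name.endswith('.rpm'):
                    continue
                src = os.path.join(root, name)
                if os.path.getmtime(src) > last:
                    # keep the repo-relative layout so the disc can be copied
                    # straight over the remote mirror tree
                    dest = os.path.join(STAGING, os.path.relpath(src, MIRROR))
                    os.makedirs(os.path.dirname(dest), exist_ok=True)
                    shutil.copy2(src, dest)

        # record this run so the next burn only picks up newer packages
        open(STAMP, 'w').close()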

    Read the article

  • How to associate the ".exe" extension to be opened with Mono?

    - by wuser
    I want to associate the .exe file extension with Mono (I don't care about Wine). Apparently, when using Finder's GUI, only .app files (application bundles) can be selected. But the Mono executable (/Libraries/Frameworks/Mono.Framework/Current/bin/mono) is no such bundle. I tried some AppleScript:
        on run this_file
            do shell script "mono this_file &"
        end run
    but Finder's GUI still doesn't let me associate that with .exe files. How do I associate a specific file extension with a command-line application in Mac OS X?

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx] I'm wrapping up a bit of the work we've been doing on data movement optimizations for cloud computing, and the latest set of data yielded some interesting points I thought I'd share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting.
    Summary: for those who don't like to read detailed posts or don't have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains... upwards of 10x-24x and a reduction in overall file transfer time of upwards of 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards).
    Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central, as it is physically closest to us) and do not represent intra-cloud results. We have performed intra-cloud tests and the overall results are similar in notion, but the data rates are significantly different, as are the tipping points for the various block sizes; this will be detailed separately.
    We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the Azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files of the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here.
    The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially, thereby spreading the effects of periodic Internet delays across the collection of results. We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps will yield a sufficiently balanced set of results.
    Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then to upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see the linked source for specific implementation details in Program.cs, line 173 and following... less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is as follows: [diagram omitted]
    We then tested the effects of blocking and parallelizing the transfers by running the updated application against the same source set and did a parameter sweep on the block size, including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn't worth the trouble and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers, and the results were encouraging. The Excel version of the results is available here.
    Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size you will end up with a "negative optimization" due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and as supported by the raw data provided in the linked worksheet) the charts and discussion below ignore source file sizes less than 1MB.
    The first chart illustrates some interesting points about the results:
    - When the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size)
    - For some of the moderately-sized source files, small blocks (256KB) are best
    - As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, increased number of individual transfer requests, and reassembly/committal costs)
    - Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant
    - The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file.
    The second chart is another view of the same data, just with the axes changed (the x-axis represents file size and the plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size, but also highlights the benefits of some of the other block sizes at different source file sizes.
    The last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here, other than this view of the data highlights the negative effects of poorly choosing a block size for smaller files.
    Summary: What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.
    Related Resources:
    - Source code for upload test application
    - Source code for random file generator
    - OData feed of raw data from non-optimized transfer tests (Experiment Metadata, Experiment Datasets: 2KB, 32KB, 64KB, 128KB, 256KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB and 1GB uploads; Raw Data)
    - OData feeds of raw data from blocked/parallelized transfer tests (Experiment Metadata, Experiment Datasets, Raw Data: 256KB, 512KB, 1MB, 2MB and 4MB blocks)
    - Excel worksheet showing summarizations and comparisons
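    The blocking-and-parallelizing technique described above can be sketched independently of the storage SDK. The Python below is a minimal illustration, not the article's code: upload_block and commit_blocks are hypothetical stand-ins for the service's PutBlock/PutBlockList-style calls, and the per-block MD5 mirrors the integrity check the post mentions.

        import hashlib
        from concurrent.futures import ThreadPoolExecutor

        BLOCK_SIZE = 1 * 1024 * 1024      # 1 MB, the article's best fixed choice

        def split_blocks(path):
            """Yield (index, bytes) chunks of the source file."""
            with open(path, 'rb') as f:
                index = 0
                while True:
                    data = f.read(BLOCK_SIZE)
                    if not data:
                        break
                    yield index, data
                    index += 1

        def upload_file(path, upload_block, commit_blocks, workers=8):
            """upload_block(block_id, data, md5) and commit_blocks(block_ids) are
            hypothetical callables standing in for the storage API."""
            block_ids = []
            with ThreadPoolExecutor(max_workers=workers) as pool:
                futures = []
                for index, data in split_blocks(path):
                    block_id = 'block-%06d' % index
                    block_ids.append(block_id)
                    md5 = hashlib.md5(data).hexdigest()
                    futures.append(pool.submit(upload_block, block_id, data, md5))
                for f in futures:
                    f.result()                # re-raise any per-block failure
            commit_blocks(block_ids)          # assemble the blob once all blocks landed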

    Read the article

  • Jumping with Mecanim synchronization

    - by Abhishek Deb
    I am using Unity3D 4.1 for one of my projects. I have a robot character which is always running, and I am using the Mecanim animation system. What I really want: when I press the Space bar, the character should jump up in the air, triggering an animation clip, and by the time it reaches the ground the animation clip should also end. What actually happens: when I press the Space bar, the character jumps in the air. The animation clip plays as it should, but ends way before the character reaches the ground, so it looks like he is running in mid air. What I have done: I have this humanoid robot set up with a jump animation bound to the Space bar key. Also, instead of using root motion, I am moving the robot directly from code:
        // Jumping
        if (Input.GetKeyDown(KeyCode.Space)) {
            rigidbody.AddForce(Vector3.up * jumpVelocity);
            anim.SetBool("Jump", true);
        } else {
            anim.SetBool("Jump", false);
        }
    Character's details: Rigidbody = Mass: 30, Freeze rotation: x, y, z; Capsule Collider = Material: metal, center (0, 4.5, 0), radius: 1, height: 11; Script = jumpVelocity: 20000; Jump animation clip: ~2 seconds. I am really out of ideas on how to synchronize everything. Should I make the character jump in some other way so that it quickly comes down and touches the ground to match the animation clip? If yes, please provide a direction.

    Read the article

  • In gnuplot, how to plot with lines but skip missing data points?

    - by Anna
    I've got a value associated with each day, like this:
    120530 70.1
    120531 69.0
    120601 69.2
    120602 69.5
    # and so on for 200 lines
    When plotting this data in gnuplot with lines, the data points are nicely connected. Unfortunately, in places over a week of data points can be missing, and gnuplot draws long lines over these intervals. How can I make gnuplot only connect points on consecutive days? Solutions that require preprocessing of the data are fine, as I already smooth it with a script. Here is what I use:
    set xdata time
    set timefmt "%y%m%d"
    plot "vikt_ma.txt" using 1:2 with lines title "first line", \
         "" using 1:3 with lines title "second line"
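    Since the data is already preprocessed, one option is to have that script emit a blank record wherever more than one day is missing: gnuplot lifts the pen at a blank line, so only consecutive days get connected. A minimal Python sketch, assuming the two-column layout shown above and the same filename as the plot command.

        from datetime import datetime, timedelta
        import sys

        previous = None
        for line in open('vikt_ma.txt'):
            if not line.strip() or line.startswith('#'):
                continue
            day = datetime.strptime(line.split()[0], '%y%m%d')
            # a blank record tells gnuplot to lift the pen before the next point
            if previous is not None and day - previous > timedelta(days=1):
                sys.stdout.write('\n')
            sys.stdout.write(line)
            previous = day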

    Read the article

  • PowerShell & SQL Compare

    - by Grant Fritchey
    Just a quick blog post to share a couple of scripts for using PowerShell to call SQL Compare. This is an example from my session at SQL in the City on setting up a sandbox development process. This just runs a compare between a set of scripts and a database and deploys it.
    set-Location "c:\Program Files (x86)\Red Gate\SQL Compare 10\";
    ./sqlcompare /s2:DOJO /db2:MovieManagement_Sandbox /sourcecontrol1 /vu1:grant /vp1:12345 /r1:HEAD /sfx:scripts.xml /sync /mfx:migrations.xml /verbose;
    I would not recommend using the /verbose output for real automation, but I'm showing off how the tool works. This particular script does a compare straight from source control to a database on my server. You can use variables where I've hard coded. That's it. Works great. Just wanted to share it out there. I have others that I'll track down and put up here.

    Read the article

  • Log rotation with automatic *.log file discovery

    - by Mikko Ohtamaa
    I am hosting several websites, each of which runs its own Python process and writes *.log output files, but the directory structure is not standardized. Example:
    -rw-r--r-- 1 plone plone 125M 2012-08-29 11:35 ./x/var/log/instance-Z2.log
    -rw-r--r-- 1 plone plone 19M 2012-08-29 00:07 ./zope2.9/y/log/event.log
    -rw-r--r-- 1 plone plone 188M 2012-08-13 00:09 ./zope2.9/y/log/Z2.log
    -rw-r--r-- 1 plone plone 137M 2010-11-16 09:41 ./zope2.9/y/log/event.log
    I'd like logrotate to auto-discover these log files and rotate them, as opposed to manually typing every log file into the logrotate conf. Do any existing tools offer this kind of log file discovery and rotation capability without manually specifying each file? If not... should I just write a shell script which generates the logrotate conf?
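    logrotate accepts shell-style wildcards in its configuration, so a pattern such as /srv/*/var/log/*.log may already cover a lot of layout variation. If the trees are too irregular even for that, a small generator run from cron can glob for *.log files and write a config for logrotate to pick up. A sketch follows; the root path and rotation policy are assumptions, and copytruncate is used because the Zope processes keep their log files open.

        import os

        LOG_ROOT = '/srv/plone'                    # assumed root of all the site trees
        CONF = '/etc/logrotate.d/plone-sites'

        stanza = """%s {
            weekly
            rotate 8
            compress
            delaycompress
            missingok
            notifempty
            copytruncate
        }
        """

        logs = []
        for root, dirs, files in os.walk(LOG_ROOT):
            logs.extend(os.path.join(root, f) for f in files if f.endswith('.log'))

        with open(CONF, 'w') as conf:
            for path in sorted(logs):
                conf.write(stanza % path)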

    Read the article

  • dhclient configures /etc/resolv.conf with invalid entry

    - by kubal5003
    I'm trying to figure out why running dhclient on my interface sets /etc/resolv.conf to the IP number of my gateway (router). This entry is invalid and every time causes an inability to resolve any address. I would like to either: (1) stop dhclient from overwriting /etc/resolv.conf, or (2) make dhclient write the valid DNS IP from my router there. More on the environment: I'm using a virtual Debian Wheezy as a client system on Windows Seven x64. It is run by VirtualBox with networking mode set to bridged (all packets from Debian are injected into my network interface on Windows). If I manually configure /etc/resolv.conf then everything works fine; doing this on every boot is quite annoying. PS I know I can write a script to do it for me, but this is not the solution I want.
    Edit:
    router IP: 192.168.1.100
    /etc/resolv.conf AFTER running dhclient eth0: "nameserver 192.168.1.100"
    What I would like /etc/resolv.conf to look like: "nameserver 89.202.xxxx" (I don't have to provide the real IP, do I?)
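    For option 2, dhclient can be told to ignore the DNS server offered by the router and substitute another one, using a supersede statement in its own configuration rather than a hook script. A sketch of the Debian configuration fragment; the nameserver address is a placeholder for the real upstream DNS.

        # /etc/dhcp/dhclient.conf
        interface "eth0" {
            supersede domain-name-servers 89.202.0.1;
        }

    Using prepend domain-name-servers instead of supersede keeps the router's entry as a fallback rather than replacing it.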

    Read the article

  • Day 3 - XNA: Hacking around with images

    - by dapostolov
    Yay! Today I'm going to get into some code! My mind has been on this all day! I find it amusing how I practice, daily, to be "in the moment" or "present", and the excitement and anticipation of this project seems to snatch it away from me frequently. WELL!!! (Shakes Excitedly) Let's do this =)! Let's code! For these next few days it is my intention to better understand image rendering using XNA; after said prototypes are complete I should (fingers crossed) be able to dive into my game code using the design document I hammered out the other night. On a personal note, I think the toughest thing right now is finding the time to do this project. Each night, after my little ones go to bed, I can only really afford a couple of hours of work on this project. However, I hope to utilise this time as best as I can, because this is the first time in a while I've found a project that I've been passionate about. A friend recently asked me if I intend to go 3D or extend the game design. Yes. For now I'm keeping it simple. Lastly, just as a note, as I was doing some further research into image rendering this morning I came across some other XNA content and lessons learned. I believe this content could probably have been posted in the first couple of posts; however, I will share the new content as I learn it at the end of each day. Maybe I'll take some time later to fix the posts, but for now:
    Installation and Deployment - Lessons Learned
    I had installed the XNA Studio (Day 1) and the site instructions were pretty easy to follow. However, I had a small difficulty with my development environment. You see, I run a virtual desktop development environment. Even though I was able to code and compile all the tutorials, the game failed to run... because I lacked a 3D capable card; it was not detected on the virtual box... First Lesson: the XNA runtime needs to "see" the 3D card! No sweat, I copied the files over to my parent box and executed the program. ERROR. Hmm... Second Lesson (which I should probably have known, but I let the excitement get the better of me): you need the XNA runtime on the client PC to run the game, oh, and don't forget the .Net Runtime!
    Sprite, it ain't just a Soft Drink...
    With these prototypes I intend to understand and perform the following tasks:
    - learn game development terminology
    - how to place and position (rotate) a static image on the screen
    - how to layer static images on the screen
    - understand image scaling
    - can we reuse images?
    - understand how framerate is handled in XNA
    - how to display text, basic shapes, and colors on the screen
    - how to interact with an image (collision of user input?)
    - how to animate an image and understand basic animation techniques
    - how to detect colliding images or screen edges
    - how to manipulate the image, let's say colors, stretching
    - how to focus on a segment of an image... like only displaying a frame on a film reel
    - what's the best way to manage images (compression, storage, location, prevent artwork theft, etc.)
    Well, let's start with this "prototype" task list for now... Today, let's get an image on the screen and maybe I can mark a few of the tasks as completed...
    C# Prototype1
    - New Visual Studio Project
    - Select the XNA Game Studio 3.1 Project Type
    - Select the Windows Game 3.1 Template
    - Type Prototype1 in the Name textbox provided
    - Press OK.
    At this point code has auto-magically been created. Feel free to press the F5 key to run your first XNA program. You should have a blue screen in front of you. Without getting into the nitty gritty right now, the code that was generated basically clears the window content with the lovely CornflowerBlue color. Something to notice: when you move your mouse into the window... nothing. ooooo spoooky. Let's put an image on that screen!
    Step A - Get an image into the solution
    Under "Content" in your Solution Explorer, right click and add a new folder and name it "Sprites". Copy a small image in there; I copied a "Royalty Free" wizard hat from a quick google search and named it wizards_hat.jpg (rightfully so!)
    Step B - Add the sprite and position fields
    Now, open/edit Game1.cs. Locate the following line: SpriteBatch spriteBatch; Under this line type the following:
        SpriteBatch spriteBatch; // the line you are looking for...
        Texture2D sprite;
        Vector2 position;
    Step C - Load the image asset
    Locate the "LoadContent" method and replicate the following:
        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            // your image name goes here...
            sprite = Content.Load<Texture2D>("Sprites\\wizards_hat");
            position = new Vector2(200, 100);
            base.LoadContent();
        }
    Step D - Draw the image
    Locate the "Draw" method and replicate the following:
        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);
            spriteBatch.Begin(SpriteBlendMode.AlphaBlend);
            spriteBatch.Draw(sprite, position, Color.White);
            spriteBatch.End();
            base.Draw(gameTime);
        }
    Step E - Compile and Run
    Engage! (F5) - Debug! Your image should now display on a CornflowerBlue window, about 200 pixels from the left and 100 pixels from the top. Awesome! =) Pretty cool how we only coded a few lines to display an image, but believe me, there is plenty going on behind the scenes. However, for now, I'm going to call it a night here. Blogging all this progress certainly takes time... However, tomorrow night I'm going to detail what we just did, plus start checking off points on that list! I'm wondering right now if I should add pictures / code to this post... let me know if you want them =) Best Regards, D.

    Read the article

  • changing timezone with dpkg-reconfigure tzdata and debconf-set-selections

    - by Carlos Campderrós
    I want to set up a script that automatically changes the timezone on a machine (running Ubuntu 11.10) and also sets the right values in the debconf database. I've tried the following, but it does not work (at the end, the current timezone remains unchanged, and if I run the dpkg-reconfigure tzdata command manually, the selected values are indeed the old ones):
    #!/bin/sh -e
    echo "tzdata tzdata/Areas select Europe" | debconf-set-selections
    echo "tzdata tzdata/Zones/Europe select Madrid" | debconf-set-selections
    echo "tzdata tzdata/Zones/America select " | debconf-set-selections
    dpkg-reconfigure -f noninteractive tzdata
    So, for now, I'm doing it by messing with the files /etc/localtime and /etc/timezone directly, but I'd prefer the dpkg-reconfigure and debconf way.
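    A commonly reported explanation (an assumption here, not verified against the 11.10 postinst) is that in noninteractive mode tzdata takes its answer from /etc/timezone and then writes that value back into debconf, which is why seeding debconf first has no effect. Under that assumption, writing /etc/timezone before reconfiguring keeps the system and the debconf database in sync. A minimal Python sketch:

        import subprocess

        # Assumption: the noninteractive tzdata postinst reads the zone from
        # /etc/timezone and overwrites the debconf answers to match it.
        zone = 'Europe/Madrid'

        with open('/etc/timezone', 'w') as f:
            f.write(zone + '\n')

        subprocess.check_call(['dpkg-reconfigure', '-f', 'noninteractive', 'tzdata'])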

    Read the article

  • How can I copy from one domain PC (WinXP), while logged off, to a server in another domain (W2k3)?

    - by user37408
    Hi, I have an automation build server which creates nightly builds. It does this while logged off, on Windows XP. This is in one domain, while the server I wish to copy the builds to is in another domain (Win2k3). I can't use a network share when logged off, and as soon as I try to browse manually to the server it prompts for a username/password. I am guessing the only way is to create a script/batch file which has a domain account and password for the server and runs at a scheduled time. If there is a more elegant way, please let me know. Thanks.
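    Below is a sketch of the scheduled-script approach in Python (a batch file using net use and xcopy works the same way). The share, account and password are placeholders, and embedding credentials is the trade-off this approach accepts, so a dedicated low-privilege account with write access only to the drop folder is advisable.

        import subprocess, shutil, glob, os

        SHARE = r'\\buildstore.otherdomain.local\builds'   # hypothetical target share
        USER = r'OTHERDOMAIN\buildcopy'
        PASSWORD = 'secret'

        # authenticate to the foreign domain for this share only
        subprocess.check_call(['net', 'use', SHARE, PASSWORD, '/user:' + USER])
        try:
            for path in glob.glob(r'D:\nightly\*.zip'):
                shutil.copy2(path, os.path.join(SHARE, os.path.basename(path)))
        finally:
            # drop the connection so no credentials stay mapped on the build box
            subprocess.check_call(['net', 'use', SHARE, '/delete'])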

    Read the article

  • Importing email from other accounts in Gmail

    - by reyjavikvi
    I have a Gmail account that imports (via POP) messages from an old Hotmail account that people still send mail to. I normally read my mail with Thunderbird, not the web interface. The thing is that Gmail imports the mail about once per hour, and sometimes I want to know whether right now there are messages in my old account. I could just go and hit the "import now" button, but whatever. Is there somewhere in the settings where I can change it so the import happens every, say, 15 minutes? Alternatively, is it possible to make a script that will tell Gmail to import the messages at that moment, just like it would if I went to Settings and clicked "Check if I have mail now"? I've heard about libgmail for Python, but it doesn't seem like it can do that.

    Read the article

  • Best practices: Ajax and server side scripting with stored procedures

    - by Luka Milani
    I need to rebuild an old, huge website, probably porting everything to ASP.NET and jQuery, and I would like to ask for some suggestions and tips. Currently the website uses:
    - Ajax (client side with prototype.js)
    - ASP (VBScript server side)
    - SQL Server 2005
    - IIS 7 as web server
    This website uses hundreds of stored procedures, and the requests are made by Ajax calls to a single ASP page that contains a huge select case. A short example:
    JavaScript + Prototype:
    var data = { action: 'NEWS', callback: 'doNews', param1: $('text_example').value, ......: ..........};
    AjaxGet(data); // perform a call using another function + prototype
    Server-side ASP:
    <% ......
    select case request("Action")
    case "NEWS"
    With cmmDB
    .ActiveConnection = Conn
    .CommandText = "sp_NEWS_TO_CALL_for_example"
    .CommandType = adCmdStoredProc
    Set par0DB = .CreateParameter("Param1", adVarchar, adParamInput,6)
    Set par1DB = .CreateParameter(".....", adInteger, adParamInput)
    ' ........ ' can be more parameters
    .Parameters.Append par0DB
    .Parameters.Append par1DB
    par0DB.Value = request("Param1")
    par1DB.Value = request(".....")
    set rs=cmmDB.execute
    RecodsetToJSON rs, jsa ' create JSON response using a sub
    End With
    ....
    %>
    So, as you can see, I have an ASP page with a lot of CASEs, and this page answers all the Ajax requests on the site. My questions are:
    - Instead of having many CASEs, is it possible to create dynamic VB code that parses the Ajax request and dynamically creates the call to the desired SP (also passing the parameters sent by JS)? (See the sketch below.)
    - What is the best approach to handling situations like this, using the advantages of .NET + prototype or jQuery?
    - How do big sites handle situations like this? Do they create one page per request?
    Thanks in advance for suggestions, directions and tips.
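    Whatever the target stack ends up being, the usual replacement for the giant select case is a whitelisted dispatch table: each action name maps to a stored procedure plus the parameters it expects, and one generic handler validates the request and builds the call. The sketch below is language-neutral Python with hypothetical run_procedure/to_json helpers; the same shape translates directly to an ASP.NET handler using SqlCommand with CommandType.StoredProcedure.

        # Whitelist: action name -> stored procedure and the parameters it accepts.
        # Never build the SP name directly from the request, or any client could
        # call any procedure in the database.
        ACTIONS = {
            'NEWS':   {'proc': 'sp_NEWS_TO_CALL_for_example',
                       'params': [('Param1', str), ('Param2', int)]},
            'SEARCH': {'proc': 'sp_SEARCH', 'params': [('Query', str)]},
        }

        def handle(request):
            """`request` is a dict of the posted fields parsed from the Ajax call."""
            action = ACTIONS.get(request.get('action'))
            if action is None:
                raise ValueError('unknown action')
            args = [cast(request[name]) for name, cast in action['params']]
            # run_procedure / to_json are stand-ins for whatever data access and
            # serialization layer the rebuilt site uses.
            rows = run_procedure(action['proc'], args)
            return to_json(rows)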

    Read the article

  • My php homepage downloads index.php instead of being processed on Gandi.net

    - by alekone
    If I go to the homepage of my website, http://www.website.com (on a brand new server), the index.php gets downloaded instead of processed. I don't have the same problem in other folders. My .htaccess reads: AddHandler php5-script .php What could this be? I suspect it's something with the PHP config or the .htaccess, but I'm not able to figure it out. Help please! Edit: I don't know if this helps: it's a WordPress installation, and I have this problem only on the public part of the website, not on the admin (which renders correctly).

    Read the article

  • Netcat I/O enhancements

    - by user13277689
    When Netcat was integrated into OpenSolaris it was already clear that a couple of enhancements would be needed. The biggest set of changes made after Solaris 11 Express was released brings various I/O enhancements to the netcat shipped with Solaris 11. Also, since Solaris 11, the netcat package is installed by default in all distribution forms (live CD, text install, ...). Now, let's take a look at the new functionality:
    /usr/bin/netcat   alternative program name (symlink)
    -b bufsize        I/O buffer size
    -E                use exclusive bind for the listening socket
    -e program        program to execute
    -F                no network close upon EOF on stdin
    -i timeout        extension of timeout specification
    -L timeout        linger on close timeout
    -l -p port addr   previously not allowed usage
    -m byte_count     quit after receiving byte_count bytes
    -N file           pattern for UDP scanning
    -I bufsize        size of input socket buffer
    -O bufsize        size of output socket buffer
    -R redir_spec     port redirection; addr/port[/{tcp,udp}] syntax of redir_spec
    -Z                bypass zone boundaries
    -q timeout        timeout after EOF on stdin
    Obviously, the Swiss army knife of networking tools just got a bit thicker. While by themselves the options are pretty self-explanatory, their combination with other options, the context of use, or boundary values of option arguments make it possible to construct small but powerful tools. For example:
    - the port redirector allows you to convert a TCP stream to UDP datagrams
    - the buffer size specification makes it possible to send one-byte TCP segments or to produce IP fragments easily
    - the socket linger option can be used to produce TCP RST segments by setting the timeout to 0
    - the execute option makes it possible to simulate TCP/UDP servers or clients with a shell/Python/Perl/whatever script
    If you find other helpful uses, please share them via comments. The manual page nc(1) contains more details, along with examples of how to use some of these new options.

    Read the article

  • How do I use novnc on CentOS 6?

    - by hyphen this
    I came across noVNC in a yum search and wanted to use it, so I installed it. However, there is no information on how to actually use it. The novnc_server command exits with "Could not find vnc.html". The man page and --help output are of no help. The README on GitHub says: "Use the launch script to start a mini-webserver and the WebSockets proxy (websockify)." Which is also of no help. The Fedora and CentOS wikis have no info.

    Read the article

  • How to find the computer name a user logged on to

    - by V. Romanov
    Hi guys, is there a tool or script or some other way of knowing which computer name a specific user is currently logged on to? Or even was logged on to? Say the user "HRDrone" is working on his machine, whose hostname is "HRStation01". I, sitting at my sysadmin desk, only know that the username is "HRDrone". Is there any way I can find out that he is logged on to "HRStation01" without asking the user? AD event viewer? Anything? Thanks!
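    Without extra tooling, one blunt option is to ask each candidate machine who is logged on, for example with Sysinternals PsLoggedOn or the built-in query user (quser) command, where the remote machine permits the query. A rough Python sketch of that polling approach; the computer list is a placeholder (it could be pulled from AD instead).

        import subprocess

        COMPUTERS = ['HRStation01', 'HRStation02', 'FINStation01']   # hypothetical list
        TARGET = 'HRDrone'

        for host in COMPUTERS:
            try:
                out = subprocess.run(['query', 'user', '/server:' + host],
                                     capture_output=True, text=True,
                                     timeout=10).stdout
            except (OSError, subprocess.TimeoutExpired):
                continue          # host unreachable or query not supported
            if TARGET.lower() in out.lower():
                print('%s appears to be logged on to %s' % (TARGET, host))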

    Read the article

  • Powershell variables to string

    - by Mike Koerner
    I'm new to PowerShell. I'm trying to write an error handler to wrap around my script. Part of the error handler is dumping out some variable settings. I spent a while trying to do this and couldn't google a complete solution, so I thought I'd post something. I want to display the $myInvocation variable. In PowerShell you can do this:
    PS C:\> $myInvocation
    For my purpose I want to create a StringBuilder object and append the $myInvocation info. I tried this:
    $sbOut = new-object System.Text.Stringbuilder
    $sbOut.appendLine($myInvocation)
    $sbOut.ToString()
    This produces:
    Capacity    MaxCapacity    Length
    --------    -----------    ------
          86     2147483647        45
    System.Management.Automation.InvocationInfo
    This is not what I wanted, so I tried:
    $sbOut.appendLine(($myInvocation|format-list *))
    This produced:
    Capacity    MaxCapacity    Length
    --------    -----------    ------
         606     2147483647       305
    Microsoft.PowerShell.Commands.Internal.Format.FormatStartData Microsoft.PowerShell.Commands.Internal.Format.GroupStartData Microsoft.PowerShell.Commands.Internal.Format.FormatEntryData Microsoft.PowerShell.Commands.Internal.Format.GroupEndData Microsoft.PowerShell.Commands.Internal.Format.FormatEndData
    Finally I figured out how to produce what I wanted:
    $sbOut = new-object System.Text.Stringbuilder
    [void]$sbOut.appendLine(($myInvocation|out-string))
    $sbOut.ToString()
    MyCommand        : $sbOut = new-object System.Text.Stringbuilder
                       [void]$sbOut.appendLine(($myinvocation|out-string))
                       $sbOut.ToString()
    BoundParameters  : {}
    UnboundArguments : {}
    ScriptLineNumber : 0
    OffsetInLine     : 0
    HistoryId        : 13
    ScriptName       :
    Line             :
    PositionMessage  :
    InvocationName   :
    PipelineLength   : 2
    PipelinePosition : 1
    ExpectingInput   : False
    CommandOrigin    : Runspace
    Note that the [void] in front of the StringBuilder call suppresses the Capacity/MaxCapacity/Length output that appendLine would otherwise return, and the pipe to out-string turns the object into a string. It's not pretty, but it works.

    Read the article

  • Best way to make a shutdown hook?

    - by Binarus
    Since Ubuntu has relied on upstart for some time now, I would like to use an upstart job to gracefully shut down certain applications on system shutdown or reboot. It is essential that the system's shutdown or reboot is stalled until these applications are shut down. The applications will be started manually on occasion, and on system shutdown should automatically be ended by a script (which I already have). Since the applications can't be ended reliably without (nearly all) other services running, ending the applications has to be done before the rest of the shutdown begins. I think I can solve this with an upstart job which is triggered on shutdown, but I am unsure which events I should use and in which manner. So far, I have read the following (partly contradictory) statements:
    - There is no general shutdown event in upstart
    - Use a stanza like "start on starting shutdown" in the job definition
    - Use a stanza like "start on runlevel [06S]" in the job definition
    - Use a stanza like "start on starting runlevel [06S]" in the job definition
    - Use a stanza like "start on stopping runlevel [!06S]" in the job definition
    From these recommendations, the following questions arise:
    - Is there or is there not a general shutdown event in Ubuntu's upstart?
    - What is the recommended way to implement a "shutdown hook"?
    - When are the runlevel [x] events triggered: when the runlevel has been entered, or while it is being entered?
    - Can we use something like "start on starting runlevel [x]" or "start on stopping runlevel [x]"?
    - What would be the best solution for my problem? (A sketch of one candidate follows below.)
    Thank you very much
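    For reference, a sketch of the shape such a job usually takes, following the "runlevel [06]" suggestion from the list above. This is an untested starting point, not a verified recipe: whether it reliably stalls the rest of the shutdown is exactly the open question, and the script path is a placeholder.

        # /etc/init/stop-my-apps.conf  -- a sketch, not a verified recipe
        description "gracefully stop custom applications before shutdown/reboot"

        # fire when the system enters runlevel 0 (halt) or 6 (reboot)
        start on runlevel [06]

        # a short-lived job: declared as a task so it runs to completion
        task

        exec /usr/local/bin/stop-my-apps.sh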

    Read the article
