Search Results

Search found 23404 results on 937 pages for 'script compression'.


  • How to place SuperFetch cache on an SSD?

    - by Ian Boyd
    I'm thinking of adding a solid state drive (SSD) to my existing Windows 7 installation. I know I can (and should) move my paging file to the SSD:

    Should the pagefile be placed on SSDs? Yes. Most pagefile operations are small random reads or larger sequential writes, both of which are types of operations that SSDs handle well. In looking at telemetry data from thousands of traces and focusing on pagefile reads and writes, we find that Pagefile.sys reads outnumber pagefile.sys writes by about 40 to 1, Pagefile.sys read sizes are typically quite small, with 67% less than or equal to 4 KB, and 88% less than 16 KB. Pagefile.sys writes are relatively large, with 62% greater than or equal to 128 KB and 45% being exactly 1 MB in size. In fact, given typical pagefile reference patterns and the favorable performance characteristics SSDs have on those patterns, there are few files better than the pagefile to place on an SSD.

    What I don't know is whether I can even put a SuperFetch cache (i.e. ReadyBoost cache) on the solid state drive. I want to get the benefit of Windows being able to cache gigabytes of frequently accessed data on a relatively small (e.g. 30 GB) solid state drive. This is exactly what SuperFetch+ReadyBoost (or SuperFetch+ReadyDrive) was designed for. Will Windows offer (or let) me place a ReadyBoost cache on a solid state flash drive connected via SATA?

    A problem with the ReadyBoost cache, compared to the ReadyDrive cache, is that the ReadyBoost cache does not survive between reboots. The cache is encrypted with a per-session key, making its existing contents unusable during boot and during SuperFetch pre-fetching at login.

    Update One: I know that Windows Vista limited you to only one ReadyBoost.sfcache file (I do not know if Windows 7 removed that limitation):

    Q: Can you use multiple devices for EMDs?
    A: Nope. We've limited Vista to one ReadyBoost device per machine.
    Q: Why just one device?
    A: Time and quality. Since this is the first revision of the feature, we decided to focus on making the single device exceptional, without the difficulties of managing multiple caches. We like the idea, though, and it's under consideration for future versions.

    I also know that the 4 GB limit on the cache file was a limitation of the FAT filesystem used on most USB sticks - an SSD drive would be formatted with NTFS:

    Q: What's the largest amount of flash that I can use for ReadyBoost?
    A: You can use up to 4GB of flash for ReadyBoost (which turns out to be 8GB of cache w/ the compression).
    Q: Why can't I use more than 4GB of flash?
    A: The FAT32 filesystem limits our ReadyBoost.sfcache file to 4GB.

    Can a ReadyBoost cache on an NTFS volume be larger than 4 GB?

    Update Two: The ReadyBoost cache is encrypted with a per-boot session key. This means that the cache has to be re-built after each boot, and cannot be used to speed up boot times or reduce the latency from login to a usable desktop. Windows ReadyDrive technology takes advantage of non-volatile (NV) memory (i.e. flash) that is incorporated into some hybrid hard drives. This flash cache can be used to help Windows boot, or resume from hibernate, faster.

    Will Windows 7 use an internal SSD drive as a ReadyBoost/ReadyDrive/SuperFetch cache? Is it possible to make Windows store a SuperFetch cache (i.e. ReadyBoost) on a non-removable SSD? Is it possible to not encrypt the ReadyBoost cache, and if so, will Windows 7 use the cache at boot time?

    See also:
    SuperUser.com: ReadyBoost + SSD = ?
    Windows 7 - ReadyBoost & SSD drives?
    Support and Q&A for Solid-State Drives
    Using SSD as a cache for HDD, is there a solution?
    Performance increase using SSD for paging/fetch/cache or ReadyBoost? (Win7)
    Windows 7 To Boost SSD Performance
    How to Disable Nonvolatile Caching

    Read the article

  • Httpd restart "Address already in use" error

    - by mtndesign
    I have an .rpm which I created. In its %post section I do some setup work, and at the end of that script I call service httpd restart. It gives the following error:

      + service httpd restart
      Stopping httpd: [FAILED]
      Starting httpd: (98)Address already in use: make_sock: could not bind to address [::]:81
      (98)Address already in use: make_sock: could not bind to address 0.0.0.0:81
      no listening sockets available, shutting down
      Unable to open logs [FAILED]

    I got this from the rpm's verbose install output (-vv), so I know it's the httpd restart itself and nothing else. According to netstat, only one process (httpd) is listening on port 81:

      $ sudo netstat -nlp | grep 81
      tcp 0 0 :::81 :::* LISTEN 29670/httpd

    I don't understand why httpd FAILS at stop, and then FAILS again at start. Any ideas how to solve this?
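    What sometimes bites here is that service httpd stop returns before the old process has actually released the socket, so the immediate start finds the port still taken. A hedged workaround sketch for the %post scriptlet (the port number and timeout are assumptions):

      #!/bin/sh
      # stop, wait for port 81 to actually be released, then start
      service httpd stop
      for i in $(seq 1 10); do
          # fuser exits non-zero once nothing listens on 81/tcp any more
          if ! fuser 81/tcp >/dev/null 2>&1; then
              break
          fi
          sleep 1
      done
      service httpd start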

    Read the article

  • Algorithm to figure out appointment times?

    - by Rachel
    I have a weird situation where a client would like a script that automatically sets up thousands of appointments over several days. The tricky part is that the appointments are for a variety of US time zones, and I need to take the consumer's local time zone into account when generating appointment dates and times for each record.

    Appointment Rules:
    Appointments should be set from 8AM to 8PM Eastern Standard Time, with breaks from 12P-2P and 4P-6P. This leaves a total of 8 hours per day available for setting appointments.
    Appointments should be scheduled 5 minutes apart. 8 hours of 5-minute intervals means 96 appointments per day.
    There will be 5 users at a time handling appointments. 96 appointments per day multiplied by 5 users equals 480, so the maximum number of appointments that can be set per day is 480.

    Now the tricky requirement: appointments are restricted to 8am to 8pm in the consumer's local time zone. This means the earliest time allowed for each appointment differs by time zone:
    Eastern: 8A
    Central: 9A
    Mountain: 10A
    Pacific: 11A
    Alaska: 12P
    Hawaii or Undefined: 2P
    Arizona: 10A or 11A based on current Daylight Saving Time

    Assuming a data set can be several thousand records, and each record will contain a timezone value, is there an algorithm I could use to determine a Date and Time for every record that matches the rules above?
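    For what it's worth, a minimal greedy sketch in Python of one algorithm that satisfies these rules, normalizing everything to minutes after 8AM Eastern; the per-zone offsets simply mirror the table above, and Arizona's DST split plus real tzdata lookups are left out:

      from collections import deque

      EARLIEST_EASTERN = {  # earliest allowed slot, minutes after 8AM Eastern
          "Eastern": 0, "Central": 60, "Mountain": 120, "Pacific": 180,
          "Alaska": 240, "Hawaii": 360, "Undefined": 360, "Arizona": 120,  # or 180 off-DST
      }

      def eastern_slots():
          """Yield the 96 valid 5-minute slots in one Eastern day:
          8A-12P, 2P-4P, 6P-8P."""
          for start, end in ((0, 240), (360, 480), (600, 720)):
              for m in range(start, end, 5):
                  yield m

      def schedule(records, users=5):
          """records: list of (record_id, zone). Returns (record_id, day, minute, user)."""
          # Most flexible (earliest-allowed) records first; since the queue is
          # sorted, if the head can't take slot m then nobody pending can.
          pending = deque(sorted(records, key=lambda r: EARLIEST_EASTERN[r[1]]))
          out, day = [], 0
          while pending:
              for m in eastern_slots():
                  for u in range(users):
                      if not pending:
                          break
                      if EARLIEST_EASTERN[pending[0][1]] <= m:
                          rid, zone = pending.popleft()
                          out.append((rid, day, m, u))
              day += 1
          return out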

    Read the article

  • 10 tape technology features that make you go hmm.

    - by Karoly Vegh
    A week ago an Oracle/StorageTek Tape Specialist, Christian Vanden Balck, visited Vienna, and agreed to visit customers to give tech talks and update them on the technology boom going on around tape. I had the privilege to attend some of his sessions and noted the information and features that took the customers by surprise and made them think. Allow me to share the top 10:

    I. StorageTek as a brand: StorageTek is one of the strongest names in the tape field. The brand itself was valued so much by customers that even after Sun Microsystems acquired StorageTek, and Oracle acquired Sun, the brand lives on: all Oracle tape libraries are officially branded StorageTek. See http://www.oracle.com/us/products/servers-storage/storage/tape-storage/overview/index.html

    II. Disk information density limitations: Disk technology struggles with information density. You haven't seen disk sizes exploding lately, have you? That's partly because there are physical limits on a disk platter. The size is given, the number of platters is limited; they just can't grow, and are running out of physical area to write to. Now, in a T10000C tape cartridge we have over 1000m of tape. There you go: you have your physical space and don't need to cram all that data together. You can write in a reliable pattern, and have space to grow too.

    III. Oracle has a worldwide market share of 62% in recording head manufacturing. That's right: if you are running LTO drives, there is a good chance you rely on StorageTek production. That's two out of three LTO recording heads produced worldwide.

    IV. You can store 1 Exabyte of data in a single tape library. Yes, an Exabyte. That is 1000 Petabytes. Or a million Terabytes. A thousand million Gigabytes. You can store that in a stacked StorageTek SL8500 tape library. In one SL8500 you can put 10,000 T10000C cartridges, each storing 10TB of data (compressed). You can stack 10 of these SL8500s together. Boom: 1,000,000 TB. (N.b.: stacking means interconnecting the libraries. Yes, cartridges are moved between the stacked libraries automatically.)

    V. EMC: 'Tape doesn't suck after all. We moved on.': Do you remember the infamous 'Tape sucks, move on' Data Domain slogan? Of course they had to put it that way, having only disk products. But here's a fun fact: at EMCWorld 2012 there was a major presence of a tape-tech company - EMC, in a sudden burst of sanity, is embracing tape again.

    VI. The miraculous T10000C: Oracle StorageTek has developed an enterprise-grade tape drive and cartridge, the T10000C, with awesome numbers.
    The cartridge: native 5TB capacity, 10TB with compression. Over a kilometer of tape within the cartridge, which is locked when unmounted - no rattling of your data. The metal-particle data layer has been replaced with BaFe (barium ferrite) - metal particles lose around 7% of magnetism within 30 days; BaFe does not. Yes, we employ solid-state physicists doing R&D on demagnetisation in our labs. It can be partitioned: storage tiering within the cartridge!
    The drive: 2GB cache. Encryption implemented in hardware - no performance hit. 252 MB/s native sustained data rate, which beats disk technology by far, not to mention peak throughput. It leads the tape while never touching the data side of it, protecting your data physically too. Data integrity checking (CRC recalculation) happens on tape within the drive, without having to read the data back to the server. It reorders data from tape-order and delivers it back in application-order, writing 32 tracks at once and reading them back for CRC checks at once.

    VII. You only use 20% of your data on a regular basis. The remaining 80% just lies around for years - on continuously spinning disks, doubly consuming energy (power+cooling) and blocking disk storage capacity. There is a solution called SAM (Storage Archive Manager) that provides a filesystem unifying disk and tape, moving data on demand and transparently to clients between the different storage tiers. You can share these filesystems with NFS or CIFS for clients, and enjoy the low TCO of tape. Tapes don't spin. They sit quietly in their slots, storing 10TB of data, using no energy, producing no heat, automounted when a client accesses their data. See: http://www.oracle.com/us/products/servers-storage/storage/storage-software/storage-archive-manager/overview/index.html

    VIII. Hardware supported for three decades: Did you know that the original PowderHorn library was released in '87 and was only discontinued in 2010? That is over two decades of supported operation. Tape libraries - just like the data-carrying tape cartridges - are built for longevity. Oh, and the T10000C cartridge has a 30-year archival life for long-term retention.

    IX. Tape is easy to manage: Have you heard of Tape Storage Analytics? It is a central graphical tool to summarize, monitor, and analyze dataflow, health, and performance of drives and libraries. See: http://www.oracle.com/us/products/servers-storage/storage/tape-storage/tape-analytics/overview/index.html

    X. The next generation: The T10000B drives were able to reuse the T10000A cartridges and write even more data on them. On the same cartridges. We call this investment protection, and this is very important for Oracle going forward too. We usually support two generations of cartridges together. The current drive is the T10000C.

    (...I know I promised to list 10, but I still have two more I really want to mention. Allow me to work around the problem:)

    X++. The TallBots, the robots moving the cartridges around the StorageTek library from tape slots to drives, are cableless. Cables, belts, and chains running to moving parts in a library cause maintenance downtime, so StorageTek eliminated them. The TallBots get power, commands, even firmware upgrades through the rails they run on. Also, the TallBots don't just hook'n'pull the tapes out of their slots; they actually grip'n'lift them out. No friction, no scratches, no zillion little plastic particles floating around in the library, in the drives, on your data.

    (X++)++: Tape beats SSDs and disks: in terms of throughput (252 MB/s), in terms of TCO (disks use around 290x more power and cooling), and in terms of capacity (10TB on a single medium, and soon more).

    So... do you need to store large amounts of data? Are you legally bound to archive it for dozens of years? Would you benefit from automatic storage tiering? Have you got large media chunks to be streamed at times? Have you got power and cooling issues in growing datacenters? Do you find EMC's 180° turn on tape interesting, and appreciate it at the same time? With all that, you aren't alone. Most of the data on this planet is stored on tape. Tape is coming. Big time.

    Read the article

  • Getting live traffic/visitor analytics when using a reverse proxy

    - by jotto
    I'm in the process of implementing Varnish as a reverse proxy for a Ruby on Rails app. I'm using Google Analytics (a JS/client-side script that records visitor data), but it's several hours delayed, so it's useless for knowing what's going on now. I need at-a-glance live data that includes referring traffic and the current req/sec. Right now I am using a simple Rack middleware application to do the live stats (gist.github.com/235745), but if the majority of traffic is served by Varnish, Rack will never be hit, so this won't work. The closest solution I've found so far is http://www.reinvigorate.net/ but it's in beta (there are also no implementation details on their front page). Does Varnish have traffic logs that I can custom-format to match my Apache logs so I can combine them, or will I have to roll my own JS implementation like GA that shows the data in real time?
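    One option worth sketching: the varnishncsa tool that ships with Varnish reads the shared-memory log and emits Apache/NCSA-style combined log lines, which can be merged with the Apache logs or tailed for an at-a-glance view (paths and flags are assumptions; check your version's man page):

      # write Apache/NCSA-style access logs continuously as a daemon
      varnishncsa -a -D -P /var/run/varnishncsa.pid -w /var/log/varnish/access.log

      # rough req/sec sample: count the log lines arriving in a 10-second window
      timeout 10 tail -n0 -f /var/log/varnish/access.log | wc -l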

    Read the article

  • Is it possible to re-lock a BitLocker drive?

    - by Sean Edwards
    I'm running a BitLocker-encrypted partition on a Windows 7 Ultimate machine, which contains secure data that I have to recover infrequently. Unlocking it to access the data is obviously no problem, but is there a way to re-lock the partition when I'm done? The best I've found so far is this: http://social.technet.microsoft.com/Forums/en-US/w7itprosecurity/thread/41607938-7452-440d-8253-67fe8657bc0f Currently I have a .bat script on that drive that I can run as administrator, and that re-locks the drive, but it feels like kind of a hackish solution. Does anyone have anything better? Any idea when Microsoft might release a fix for this?
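    For reference, a minimal sketch of such a .bat using the manage-bde tool that ships with Windows 7 Ultimate/Enterprise (X: is an assumed drive letter; run from an elevated prompt):

      @echo off
      :: re-lock the BitLocker data volume, dismounting it even if files are open
      manage-bde -lock X: -ForceDismount
      :: confirm the volume now shows as locked
      manage-bde -status X: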

    Read the article

  • Windows PHP curl install: recommend a good site?

    - by phill
    So I'm struggling to get PHP curl installed on my Windows XP Professional machine, and I've tried probably 5 different sites which either don't work or refer to missing files such as the CA certificates. I'm looking to write a PHP script which logs into a site over SSL, captures the page data using regex, and emails it to me. Before I can get there, I need SSL-capable curl. I was wondering if someone can recommend a better site or tutorial which effectively walks me through that step by step. Thanks in advance.
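    For reference, a sketch of the pieces that usually have to line up on Windows (all paths are assumptions for a typical C:\php install):

      ; php.ini - enable the curl extension bundled with the Windows PHP build
      extension_dir = "C:\php\ext"
      extension = php_curl.dll

      <?php
      // quick test; CURLOPT_CAINFO points curl at a CA bundle so SSL
      // verification can succeed (the curl project distributes cacert.pem)
      $ch = curl_init('https://example.com/login');
      curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
      curl_setopt($ch, CURLOPT_CAINFO, 'C:\php\cacert.pem');
      $html = curl_exec($ch);
      if ($html === false) die(curl_error($ch));
      curl_close($ch);
      echo strlen($html) . " bytes fetched\n";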

    Read the article

  • Execute a SSIS package in Sync or Async mode from SQL Server 2012

    - by Davide Mauri
    Today I had to schedule a package stored in the shiny new SSIS Catalog store that comes with SQL Server 2012 (http://msdn.microsoft.com/en-us/library/hh479588(v=SQL.110).aspx). Once your packages are stored here, they will be executed using the new stored procedures created for this purpose. The script that gets executed if you run your packages right from Management Studio, or through a SQL Server Agent job, will be similar to the following:

      Declare @execution_id bigint
      EXEC [SSISDB].[catalog].[create_execution]
          @package_name = 'my_package.dtsx',
          @execution_id = @execution_id OUTPUT,
          @folder_name = N'BI',
          @project_name = N'DWH',
          @use32bitruntime = False,
          @reference_id = Null
      Select @execution_id

      DECLARE @var0 smallint = 1
      EXEC [SSISDB].[catalog].[set_execution_parameter_value] @execution_id,
          @object_type = 50, @parameter_name = N'LOGGING_LEVEL', @parameter_value = @var0

      DECLARE @var1 bit = 0
      EXEC [SSISDB].[catalog].[set_execution_parameter_value] @execution_id,
          @object_type = 50, @parameter_name = N'DUMP_ON_ERROR', @parameter_value = @var1

      EXEC [SSISDB].[catalog].[start_execution] @execution_id
      GO

    The problem here is that the procedure simply starts the execution of the package and returns as soon as the package has been started... thus giving you the opportunity to execute packages asynchronously from your T-SQL code. This is just *great*, but what happens if I want to execute a package and WAIT for it to finish (and thus have a synchronous execution of it)? You have to be sure that you add the "SYNCHRONIZED" parameter to the package execution, before the start_execution procedure:

      EXEC [SSISDB].[catalog].[set_execution_parameter_value] @execution_id,
          @object_type = 50, @parameter_name = N'SYNCHRONIZED', @parameter_value = 1

    And that's it. PS: From RC0, the SYNCHRONIZED parameter is automatically added each time you schedule a package execution through the SQL Server Agent. If you're using an external scheduler, just keep this post in mind.

    Read the article

  • How to use Binary Log file for Auditing and Replicating in MySQL?

    - by Pranav
    How do I use the binary log file for auditing in MySQL? I want to track changes in a DB using the binary log so that I can replicate these changes to another DB. Please do not just give me hyperlinks to the MySQL website; please direct me to the solution. I have looked at auditing options and created a script using triggers for that, but due to the Joomla DB structure it didn't work for me, hence I have to move on to the binary log concept. Now I am stuck at the start, as I am not getting the concept of making a server master/slave. Can anybody guide me how to actually initiate it via PHP?
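    A sketch of the moving parts, with assumed file names and paths:

      # 1) In my.cnf on the "master", enable binary logging and give the server an id:
      #    [mysqld]
      #    log-bin   = mysql-bin
      #    server-id = 1
      # 2) Auditing: render a binlog into readable SQL for inspection:
      mysqlbinlog /var/lib/mysql/mysql-bin.000001 > audit.sql
      # 3) Replication-by-hand: replay the captured changes against another server:
      mysqlbinlog /var/lib/mysql/mysql-bin.000001 | mysql -h other-host -u root -p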

    Read the article

  • How to find the computer name a user logged on to

    - by V. Romanov
    Is there a tool, script, or some other way of knowing what computer name a specific user is currently logged on to? Or even was logged on to? Say the user "HRDrone" is working on his machine, whose hostname is "HRStation01". I, sitting at my sysadmin desk, only know that the username is "HRDrone". Is there any way I can find out that he is logged on to "HRStation01" without asking the user? The AD Event Viewer? Anything? Thanks!
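    One sketch, assuming the Sysinternals PsLoggedOn tool is available: given a bare username it scans the machines it can reach and reports where that user has a session.

      :: search the network for sessions belonging to HRDrone
      psloggedon.exe HRDrone

      :: or ask one specific machine who is logged on there
      psloggedon.exe \\HRStation01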

    Read the article

  • PowerDNS, updating serial

    - by Roland
    I recently wrote a script that automatically enters new subdomain records into the PowerDNS MySQL database. If I enter the record manually using Zone Admin, my subdomain works 100%. But if I add it using a simple SQL insert string, e.g. "insert into records (domain_id, name, type, content, ttl, prio) values(", it does not work. I was told that I need to update the SOA serial, which I do, but it just does not want to take effect. I set it to date("Ymd")."01" and this does not work. Any suggestions will be greatly appreciated.
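    In the stock PowerDNS gmysql schema the serial is not a column of its own; it is the third field inside the SOA record's content string, so that is the value the script has to bump. A sketch (domain_id and the SOA values are placeholders; also note PowerDNS may keep serving cached answers until its packet cache expires):

      -- inspect the current SOA content for the zone
      SELECT content FROM records WHERE domain_id = 42 AND type = 'SOA';
      -- e.g. 'ns1.example.com hostmaster.example.com 2012052201 10800 3600 604800 3600'

      -- bump the serial (third field) when adding records
      UPDATE records
      SET content = 'ns1.example.com hostmaster.example.com 2012052202 10800 3600 604800 3600'
      WHERE domain_id = 42 AND type = 'SOA';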

    Read the article

  • DB API for shell scripting (any shell)

    - by foampile
    I am faced with some legacy shell scripts that run batch data processing jobs in Oracle using SQL*Plus. For the most part, the data tier does not have to communicate back to the script with retrieved data for shell-level processing, but in a few cases it does. The problem is, SQL*Plus is really meant to be an end-user app, not an API that can communicate with other clients programmatically. That is why people have invented APIs such as DBI/DBD for Perl, JDBC for Java, ODBC, etc. The way it is done now is they invoke SQL*Plus and then parse the output, which is clearly designed for human consumption, using tools like sed and awk. The whole thing is at best a hack and very prone to bugs. Since this client is rather conservative with their technology, they don't want to scale their scripts up to Perl or Python, where there are data access APIs. So I am wondering whether there are similar APIs for a shell, e.g. ksh or bash. What I would like is an API that returns data in a 2-dimensional array of strings (for lack of typing) so that I can just read DB data like that. The way they do it now is akin to parsing regular web page HTML to get a single stock quote, rather than cleanly calling a web service and being done with it. Anybody know of a product I can use? Thanks
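    Not a real API, but a common sketch of the cleaner end of this approach: run SQL*Plus in silent mode with machine-friendly SET options, so each row comes back as one delimiter-separated line instead of human-formatted output (credentials and query are placeholders):

      sql="SET PAGESIZE 0 FEEDBACK OFF HEADING OFF
      SET COLSEP '|'
      SELECT empno, ename FROM emp;
      EXIT"
      rows=$(echo "$sql" | sqlplus -S scott/tiger@ORCL)

      # each line is one row; split fields on '|' (COLSEP pads with spaces,
      # so trim if exact values matter)
      echo "$rows" | while IFS='|' read -r empno ename; do
          echo "id=${empno// /} name=$ename"
      done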

    Read the article

  • Implementing a multilanguage AI contest platform

    - by Alejandro Piad
    This is a followup to this question. To sum up: I'm implementing an AI contest site, where each user may submit several AI implementations for different games. Think Google AI Challenge, but instead of just having a big event once a year, I would like it to run in a league fashion, with all virtual players playing each other at some short interval. I want to support as many programming languages as possible. I've seen that contest sites (like Codeforces) ask you to submit source code and interact through stdin and stdout. The first question is: what is the best way of supporting multiple languages? As I see it, I can either ask people to upload a binary/script and interact through stdin/stdout, sockets, or the file system; or ask people to submit source code and wrap it with whatever is necessary for the interaction. I would like to skip the need to compile the code myself (on the server, I mean), but I am willing to do it if it's the "best" choice. I need to communicate virtual players with each other, or even better, with some intermediary arbiter. The second question is regarding security. If I'm going to be running user code on my server, I want to enforce strict security conditions, like no file system access, no networking, etc. Otherwise it would be a safe haven for hackers. I will be implementing the engine/arbiter in .NET. I would like to support at least C#, C++, Java and Python for the users' implementations. I'm willing to write interfaces for each of these languages to simplify the user's interaction with the system. Thanks in advance.
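    To make the stdin/stdout approach concrete, here is a minimal .NET arbiter sketch (all names are hypothetical; real sandboxing of the file system and network is not handled here and needs OS-level isolation such as a restricted account, job object, or container):

      using System;
      using System.Diagnostics;

      class Arbiter
      {
          // each bot is a black-box executable (compiled C++, java -jar,
          // python script...), so the platform stays language-agnostic
          static string QueryBot(string exePath, string state, int timeoutMs)
          {
              var psi = new ProcessStartInfo(exePath)
              {
                  UseShellExecute = false,        // required for redirection
                  RedirectStandardInput = true,
                  RedirectStandardOutput = true
              };
              using (var bot = Process.Start(psi))
              {
                  bot.StandardInput.WriteLine(state);          // send the game state
                  bot.StandardInput.Flush();
                  string move = bot.StandardOutput.ReadLine(); // blocking read; a real
                  // arbiter would read asynchronously so a silent bot can't hang it
                  if (!bot.WaitForExit(timeoutMs))
                  {
                      bot.Kill();                              // enforce the time limit
                      throw new TimeoutException("bot exceeded its time limit");
                  }
                  return move;
              }
          }
      }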

    Read the article

  • Is there a way to prevent output from backgrounded tasks from covering the command line in a shell?

    - by Chris Pick
    I would like to be able to run task(s) in the background of a shell and not have their output to stdout or stderr cover the command line at the bottom. Frequently I need to run other commands to interact with the background processes, and I would like to do so from the same shell, without having to open another terminal or use a multiplexer like screen to split the terminal. Ideally there would be some setting that I just don't know about (I commonly use bash or ksh), but a new or different shell or a script would be fine by me. I'm open to any suggestions and appreciate any help, thanks.
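    A couple of workaround sketches, assuming bash:

      # send the job's output somewhere other than the terminal, then peek
      # at it on demand from the same shell
      long_running_task > /tmp/task.log 2>&1 &
      tail -n 20 /tmp/task.log

      # alternatively: with the terminal's tostop flag set, background jobs
      # that try to write to the terminal are suspended instead of
      # scribbling over your prompt
      stty tostop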

    Read the article

  • Shortcut key to forward email to fixed email address in Postbox

    - by Jos v.d. Voort v.d. Kleij
    As an avid user of todo apps (currently Asana), I very much miss a way to easily forward an email from Postbox to my GTD app. Currently the workflow is: press cmd/L to open the forward-email window, type [email protected] in the To: address field, then click the Send button in the forward-email window. What I would like is an Automator workflow or AppleScript that does this for me. For example: highlight/select the mail I wish to convert to a task in Asana, then press a shortcut like ctrl/cmd/L to forward the mail to Asana. As most todo apps have custom email addresses you can use to convert an email into a task, the only thing that would need to be changed is the email address in the script. For Asana this would not even be necessary, because the email address for converting an email into a task is the same for all Asana users.
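    In case it helps, a heavily hedged AppleScript sketch that drives Postbox through GUI scripting (I am not aware of a direct AppleScript "forward" command in Postbox, so this simulates the keystrokes; the send shortcut must be verified, and access for assistive devices must be enabled):

      tell application "Postbox" to activate
      tell application "System Events"
          keystroke "l" using {command down} -- open the forward window
          delay 0.5
          keystroke "[email protected]" -- the task-inbox address goes here
          delay 0.2
          keystroke return -- confirm the recipient
          keystroke return using {command down} -- Cmd-Return sends in many mail clients; verify in Postbox
      end tell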

    Read the article

  • Set up a proxy for a VPN server on Ubuntu Server 12.04

    - by Morteza Soltanabadiyan
    I have a VPN server offering HTTPS, L2TP, OpenVPN, and PPTP. I want to set up a proxy on the server so that all connections coming from VPN clients go through it. I created the following bash script for it, but the proxy isn't working:

      gsettings set org.gnome.system.proxy mode 'manual'
      gsettings set org.gnome.system.proxy.http enabled true
      gsettings set org.gnome.system.proxy.http host 'cproxy.anadolu.edu.tr'
      gsettings set org.gnome.system.proxy.http port 8080
      gsettings set org.gnome.system.proxy.http authentication-user 'admin'
      gsettings set org.gnome.system.proxy.http authentication-password 'admin'
      gsettings set org.gnome.system.proxy use-same-proxy true
      export http_proxy=http://admin:[email protected]:8080
      export https_proxy=http://admin:[email protected]:8080
      export HTTP_PROXY=http://admin:[email protected]:8080
      export HTTPS_PROXY=http://admin:[email protected]:8080

    What do I need to do to make a global proxy for the server, so that all VPN clients use it automatically?
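    One caveat worth noting: the gsettings calls above only configure a local GNOME desktop session, and the exports only affect that one shell; neither does anything for routed VPN clients. A transparent-proxy sketch instead, assuming a proxy such as Squid listens on port 3128 and client traffic arrives on ppp+/tun+ interfaces (plain HTTP only; intercepting HTTPS this way would break TLS):

      # redirect VPN clients' web traffic into the local proxy
      iptables -t nat -A PREROUTING -i ppp+ -p tcp --dport 80 -j REDIRECT --to-port 3128
      iptables -t nat -A PREROUTING -i tun+ -p tcp --dport 80 -j REDIRECT --to-port 3128
      # keep forwarding on for everything else
      sysctl -w net.ipv4.ip_forward=1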

    Read the article

  • Adding new users

    - by user36651
    I have an FTP server running Fedora Core release 6 (Zod). The problem is I need to create new users, and I have root access saved in WinSCP, so I can run useradd or adduser via the fake terminal; but every time I try to use passwd <username> it crashes on me and won't allow me to change or add a password. My questions are:
    - Is there a place where the adduser script stores default passwords? Or what is the default?
    - Is there another way I can set passwords for new users?
    I don't want to change the root password, because EVERYONE has root access and it's saved in WinSCP (I'm sure you see the problem here...). I want to create user accounts for each user instead of giving them all blatant root access. The goal here is to gradually migrate everyone over to their new accounts and then change the root password. Any suggestions would be greatly appreciated.
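    One likely explanation and a workaround sketch: passwd wants an interactive terminal, which WinSCP's "fake" console doesn't provide, while chpasswd reads user:password pairs non-interactively from stdin (names are placeholders):

      # create the account with a home directory
      useradd -m newuser
      # set its password without needing a TTY
      echo 'newuser:S3cretPass' | chpasswd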

    Read the article

  • How to bind std::map to Lua with LuaBind

    - by MahanGM
    Is this possible to achieve in Lua?

      player.scripts["movement"].properties["stat"] = "stand"
      print (player.scripts["movement"].properties["stat"])

    I've done the getter method in C++ with this approach:

      luabind::object FakeScript::getProp()
      {
          luabind::object obj = luabind::newtable(L);
          for(auto i = this->properties.begin(); i != this->properties.end(); i++)
          {
              obj[i->first] = i->second;
          }
          return obj;
      }

    But I'm stuck with the setter. The first line in the Lua code, which tries to set the value "stand" for the key "stat", is not going to work; it keeps redirecting me to the getter method. The setter method only works when I drop ["stat"] from properties. I can do something like this for the setter in my script:

      player.scripts["movement"].properties = {stat = "stand"}

    But this isn't what I want, because I would have to go through my real keys in C++ to determine which key is placed in the setter's argument table. This is my map in the class:

      std::map<std::string, std::string> properties;
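    One workable sketch: the getter above hands Lua a detached copy of the map, so writes into that table can never reach the underlying std::map; exposing explicit get/set methods sidesteps that (class and binding names are assumptions):

      #include <luabind/luabind.hpp>
      #include <map>
      #include <string>

      class FakeScript {
      public:
          std::string getProp(const std::string& key) { return properties[key]; }
          void setProp(const std::string& key, const std::string& value) {
              properties[key] = value;  // writes land in the real map
          }
      private:
          std::map<std::string, std::string> properties;
      };

      void bind(lua_State* L) {
          luabind::module(L) [
              luabind::class_<FakeScript>("Script")
                  .def("get", &FakeScript::getProp)
                  .def("set", &FakeScript::setProp)
          ];
      }

      // Lua usage then becomes:
      //   player.scripts["movement"]:set("stat", "stand")
      //   print(player.scripts["movement"]:get("stat"))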

    Read the article

  • Adding FK Index to existing table in Merge Replication Topology

    - by Refracted Paladin
    I have a table that has grown quite large, and that we are replicating to about 120 subscribers. A FK on that table does not have an index, and when I ran an Execution Plan on a query that was causing issues, it had this to say:

      /* Missing Index Details from CaseNotesTimeoutQuerys.sql - mylocal\sqlexpress.MATRIX (WWCARES\pschaller (54))
         The Query Processor estimates that implementing the following index could improve the query cost by 99.5556%. */
      /*
      USE [MATRIX]
      GO
      CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>]
      ON [dbo].[tblCaseNotes] ([PersonID])
      GO
      */

    I would like to add this, but I am afraid it will FORCE a reinitialization. Can anyone verify or validate my concerns? Does it even work that way, or would I need to run the script on each subscriber? Any insight would be appreciated.
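    For what it's worth, a replication-aware sketch: as far as I know, adding an index neither gets replicated automatically nor forces a reinitialization, so it can be pushed to all subscribers with sp_addscriptexec rather than run 120 times by hand (publication name and script path are placeholders):

      -- save the CREATE INDEX statement to a .sql file reachable by the
      -- distributor, then on the publisher:
      EXEC sp_addscriptexec
          @publication = N'MyPublication',
          @scriptfile  = N'\\distributor\share\add_personid_index.sql';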

    Read the article

  • Sql Server Prevent Saving Changes That Require Table to be Re-created

    When working with SQL Server Management Studio, if you use the Design view of a table and attempt to make a change that will require the table to be dropped and re-added, you may receive an error message like this one: "Saving changes is not permitted. The changes you have made require the following tables to be dropped and re-created. You have either made changes to a table that can't be re-created or enabled the option Prevent saving changes that require the table to be re-created." In truth, it's quite likely that you didn't enable such an option, despite the error dialog's accusations, as it is enabled by default when you install SQL Management Studio. You can learn more about the issue in the KB article, "Error message when you try to save a table in SQL Server 2008: Saving changes is not permitted."

    Warning: as the above article states, it is not recommended that you turn off this option (at least not permanently), as it helps ensure that you do not accidentally change the schema of a table such that data is lost. Do so at your peril.

    The simplest way to bypass this error is to go into the Designers section of the Options dialog and uncheck the option "Prevent saving changes that require table re-creation."

    The main reasons you will see this error are if you attempted to do any of the following to the table whose design you are saving:
    - Change the Allow Nulls setting for a column
    - Reorder columns
    - Change any column's data type
    - Add a new column

    The recommended workaround is to script out the changes to a SQL file and execute them by hand, or to simply write your own T-SQL to make the changes.
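    For one of those cases, a sketch of the "write your own T-SQL" route (table and column names are hypothetical). Of the four changes listed, reordering columns is the one that genuinely requires a drop-and-recreate; the others can usually be expressed directly:

      -- change nullability and/or data type in place, instead of letting
      -- the designer drop and re-create the whole table
      ALTER TABLE dbo.Customers
          ALTER COLUMN Notes nvarchar(1000) NULL;

      -- adding a column is likewise a plain ALTER
      ALTER TABLE dbo.Customers
          ADD CreatedOn datetime NULL;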

    Read the article

  • Nodes inside Cisco VPN. Incoming SSH requests allowed. But can't initiate an outbound SSH.

    - by Douglas Peter
    I have a gateway-to-gateway VPN set up between my Linksys RV042 router and a Cisco VPN. I am able to SSH into any of the machines inside the VPN from my network, but none of the machines inside the VPN can initiate an SSH connection into my network. It seems they've blocked even all ping requests to my network gateway. This is the requirement: I have scripts that SSH into the machines inside the VPN and run a long MySQL query. The query generates output to a file. The time these queries take is variable, so I have a loop on my machine that periodically SSHes into the VPN machine, checks if the query has finished, and pulls the generated file using SCP. I need to simplify it thus: the script will run on the machine inside the VPN, and when the query completes, it will SSH into my machine and push the generated file. Thanks for any ideas.
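    One sketch that needs no change on the Cisco side: since the VPN machines do accept inbound SSH, keep a reverse tunnel open from your machine and let the remote script push back through it (ports and hosts are placeholders):

      # from your machine: forward the VPN host's local port 2222 back to
      # your own sshd, and keep the tunnel open
      ssh -N -R 2222:localhost:22 user@vpn-machine &

      # on the VPN machine, the script can now push the file "into" your
      # network by talking to the forwarded port
      scp -P 2222 /tmp/query_output.csv you@localhost:/data/incoming/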

    Read the article

  • Scale a game object to Bounds

    - by Spikeh
    I'm trying to scale a lot of dynamically created game objects in Unity3D to the bounds of a sphere collider, based on the size of their current mesh. Each object has a different scale and mesh size. Some are bigger than the AABB of the collider, and some are smaller. Here's the script I've written so far:

      private void ScaleToCollider(GameObject objectToScale, SphereCollider sphere)
      {
          var currentScale = objectToScale.transform.localScale;
          var currentSize = objectToScale.GetMeshHierarchyBounds().size;
          var targetSize = (sphere.radius * 2);
          var newScale = new Vector3
          {
              x = targetSize * currentScale.x / currentSize.x,
              y = targetSize * currentScale.y / currentSize.y,
              z = targetSize * currentScale.z / currentSize.z
          };
          Debug.Log("{0} Current scale: {1}, targetSize: {2}, currentSize: {3}, newScale: {4}, currentScale.x: {5}, currentSize.x: {6}",
              objectToScale.name, currentScale, targetSize, currentSize, newScale, currentScale.x, currentSize.x);
          //DoorDevice_meshBase Current scale: (0.1, 4.0, 3.0), targetSize: 5, currentSize: (2.9, 4.0, 1.1), newScale: (0.2, 5.0, 13.4), currentScale.x: 0.125, currentSize.x: 2.869114
          //RedControlPanelForAirlock_meshBase Current scale: (1.0, 1.0, 1.0), targetSize: 5, currentSize: (0.0, 0.3, 0.2), newScale: (147.1, 16.7, 25.0), currentScale.x: 1, currentSize.x: 0.03400017
          objectToScale.transform.localScale = newScale;
      }

    And the supporting extension method:

      public static Bounds GetMeshHierarchyBounds(this GameObject go)
      {
          var bounds = new Bounds(); // Not used, but a struct needs to be instantiated
          if (go.renderer != null)
          {
              bounds = go.renderer.bounds; // Make sure the parent is included
              Debug.Log("Found parent bounds: " + bounds);
              //bounds.Encapsulate(go.renderer.bounds);
          }
          foreach (var c in go.GetComponentsInChildren<MeshRenderer>())
          {
              Debug.Log("Found {0} bounds are {1}", c.name, c.bounds);
              if (bounds.size == Vector3.zero)
              {
                  bounds = c.bounds;
              }
              else
              {
                  bounds.Encapsulate(c.bounds);
              }
          }
          return bounds;
      }

    After the re-scale, there doesn't seem to be any consistency to the results - some objects with completely uniform scales (x,y,z) seem to resize correctly, but others don't :| It's one of those things I've been trying to fix for so long that I've lost all grasp on the logic :| Any help would be appreciated!
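    If it's useful, a sketch of one likely cause and fix: renderer.bounds is a world-space AABB, so any rotated child inflates currentSize and skews the per-axis ratios. Measuring with the rotation temporarily zeroed, and scaling uniformly by the largest axis, is one stabler variant (assumes the object can safely be un-rotated for a frame):

      private void ScaleToCollider(GameObject objectToScale, SphereCollider sphere)
      {
          var t = objectToScale.transform;
          var savedRotation = t.rotation;
          t.rotation = Quaternion.identity;   // measure the hierarchy unrotated

          var currentSize = objectToScale.GetMeshHierarchyBounds().size;
          var targetSize = sphere.radius * 2f;

          // one uniform factor keeps proportions; the largest axis ends up
          // touching the sphere's AABB
          var largest = Mathf.Max(currentSize.x, currentSize.y, currentSize.z);
          t.localScale = t.localScale * (targetSize / largest);

          t.rotation = savedRotation;         // restore the original rotation
      }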

    Read the article

  • RewriteCond in .htaccess file gives me bad flag delimiters

    - by Steven
    I'm upgrading a website, and I use this .htaccess file to show a maintenance page:

      #MAINTENANCE-PAGE REDIRECT
      RewriteEngine on
      RewriteCond %{REMOTE_ADDR} !^127\.0\.0\.0 # Bogus IP address for posting here
      RewriteCond %{REMOTE_ADDR} !^127\.0\.0\.0 # Bogus IP address for posting here
      RewriteCond %{REQUEST_URI} !^/maintenance\.html$
      RewriteRule ^(.*)$ http://www.mysite.no/maintenance.html [R=307,L]

    This opens the maintenance page for all users except the two IP addresses I've added; those get an Internal Server Error. I've used the same script on another site, and that worked fine. Looking at the error log, I see the following:

      /var/www/vhosts/mysite.no/httpdocs/.htaccess: RewriteCond: bad flag delimiters

    If I remove my .htaccess file, I can work with my site just fine. My site is hosted on a VPS running CentOS 5. How can I fix this problem?
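    For what it's worth, mod_rewrite does not allow trailing comments on a directive line: everything after the pattern is parsed as a flags argument, so the "# Bogus IP address..." text on the RewriteCond lines produces exactly "RewriteCond: bad flag delimiters". A corrected sketch with the comments moved to their own lines:

      #MAINTENANCE-PAGE REDIRECT
      RewriteEngine on
      # Bogus IP addresses for posting here
      RewriteCond %{REMOTE_ADDR} !^127\.0\.0\.0
      RewriteCond %{REMOTE_ADDR} !^127\.0\.0\.0
      RewriteCond %{REQUEST_URI} !^/maintenance\.html$
      RewriteRule ^(.*)$ http://www.mysite.no/maintenance.html [R=307,L]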

    Read the article

  • POST data not being received

    - by Alexander
    I've got an iPhone app that is supposed to send POST data to my server to register the device in a MySQL database, so we can send notifications etc. to it. It sends its unique identifier, device name, token, and a few other small things like passwords and usernames as a POST request to our server. The problem is that sometimes the server doesn't receive the data. And by this I mean it's not just receiving blank values for the POST inputs: it's not receiving ANY POST data at all. I am logging all POST inputs to my server in log files, and when the script that relies on the POST data from the device fails (detects no data), I notice that it's because NO POST data was sent. Is this a problem on the server, like refusing data or something, or does this have to be on the client's side? What could be causing this?
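    A diagnostic sketch, assuming the server side is PHP (the question doesn't say): log the raw request body and method rather than just the parsed POST inputs. An empty raw body means the client never sent one; a common culprit is the app following a 301/302 redirect, which turns the POST into a body-less GET.

      <?php
      // log method, body length, and the first part of the raw body
      $raw = file_get_contents('php://input');
      error_log(sprintf(
          "[register] method=%s len=%d raw=%s",
          $_SERVER['REQUEST_METHOD'],
          strlen($raw),
          $raw === '' ? '(empty)' : substr($raw, 0, 200)
      ));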

    Read the article

  • How do I install Red5 using apt-get? Getting sub-process error

    - by Dalen
    This is a copy of a question from another forum that never got satisfactorily answered. I encountered the same error a few days ago on Ubuntu 13.04 Desktop. It seems like Red5 is installed, but it cannot be run for some reason. Can anyone explain what is going on here? Why should dpkg fail? I mean, this is a checked repo; it should work fine.

      apt-get install red5-server
      Selecting previously deselected package red5-server.
      (Reading database ... 53491 files and directories currently installed.)
      Unpacking red5-server (from .../red5-server_0.9.1-4squeeze1_all.deb) ...
      Setting up red5-server (0.9.1-4squeeze1) ...
      Starting Flash streaming server : red5-server failed!
      invoke-rc.d: initscript red5-server, action "start" failed.
      dpkg: error processing red5-server (--configure):
      subprocess installed post-installation script returned error exit status 1
      configured to not write apport reports
      Errors were encountered while processing: red5-server
      E: Sub-process /usr/bin/dpkg returned an error code (1)

    The logfile error.log in /usr/share/red5/log was completely empty. The other logs were not, but according to them there were no problems at all.
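    Since the package is unpacked and only its init script fails, a diagnostic sketch (paths are the Debian/Ubuntu defaults):

      sudo sh -x /etc/init.d/red5-server start   # trace exactly where startup dies
      java -version                              # red5 needs a working JRE
      sudo dpkg --configure -a                   # retry the failed postinst afterwards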

    Read the article
