Search Results

Search found 50874 results on 2035 pages for 'change script'.


  • jQuery calculator

    - by Nemanja
    I want to make a calculator for summation. I have this jQuery code:

        <script type="text/javascript">
        $(document).ready(function() {
            $("#calculate").click(function() {
                if ($("#input").val() != '' && $("#input").val() != undefined) {
                    $("#result").html("Result is: " + parseInt($("#input").val()) * 30);
                } else {
                    $("#result").html("Please enter some value");
                }
            });
        });
        </script>
        <div id="calculator">
            <br />
            <input type="text" style="left:20px;" id="input" />
            <div id="result"></div>
            <input type="button" id="calculate" value="calculate" style="left:20px;" />
        </div>

    I want something like this: I type the number 5, then a + from the keyboard, then another value. When I click the summation button I should get the total. Can anyone help me with this? Thank you very much! (A sketch follows after this entry.)

    Read the article
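
    A minimal sketch of one way to do the summation, assuming the same #input, #result and #calculate ids as in the question; it splits the typed expression on "+" and adds the parts instead of multiplying by 30:

        $(document).ready(function () {
            $("#calculate").click(function () {
                var raw = $.trim($("#input").val() || "");
                if (raw === "") {
                    $("#result").html("Please enter some value");
                    return;
                }
                var sum = 0;
                $.each(raw.split("+"), function (i, part) {
                    sum += parseFloat(part) || 0;   // skip anything that is not a number
                });
                $("#result").html("Result is: " + sum);
            });
        });

    Typing "5+12+3" and clicking calculate would then show "Result is: 20".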

  • How do I allow users to execute commands via SSH without allocating a pseudo-terminal?

    - by Dani El
    I need to allow users to run a limited set of commands, but not to let them open interactive sessions, just like GitHub does. If you try to ssh in without a command, it greets you and closes the session. I can get part of the way there with ForceCommand some-script, but inside some-script I then need to evaluate the user's input myself. Is there perhaps some other no-TTY-like option in sshd_config? --- UPDATE --- I'm looking for a pure SSH / Bash solution, not Perl/Python/etc. hacks. (A sketch follows after this entry.)

    Read the article
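
    A minimal Bash sketch of such a ForceCommand wrapper; the script path and the whitelisted commands are placeholders, while SSH_ORIGINAL_COMMAND is the real variable sshd sets to whatever the client asked to run:

        #!/bin/bash
        # /usr/local/bin/ssh-wrapper (placeholder path), wired up in sshd_config with:
        #   ForceCommand /usr/local/bin/ssh-wrapper
        case "$SSH_ORIGINAL_COMMAND" in
            "")                         # no command given: refuse the interactive session
                echo "Interactive sessions are not allowed."
                exit 1
                ;;
            uptime|"df -h")             # whitelist of allowed commands (placeholders)
                exec $SSH_ORIGINAL_COMMAND   # unquoted on purpose so arguments split
                ;;
            *)
                echo "Command not permitted."
                exit 1
                ;;
        esac

    Combining this with the no-pty option in authorized_keys (or PermitTTY no in newer sshd versions) also stops a pseudo-terminal from being allocated.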

  • How to get the IP Address for your Local Area Connection on Windows Server?

    - by Geo
    I want to create a batch or VBS file that puts together a URL and executes it. Part of that URL needs to be the actual IP address of the machine. How can I get that IP address into a variable so I can include it in the script? (A sketch follows after this entry.) EDIT 1: I found out that the command below will give me the IP address, but I still don't know how to get that value into a variable to use it in a script.

        c:\> wmic NICCONFIG WHERE IPEnabled=true GET IPAddress /format:csv
        Node,IPAddress
        IP-0AFB,{10.25.5.2}

    Read the article
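
    A minimal batch sketch, assuming the default English output of ipconfig (the "IPv4 Address" label differs on localized or pre-Vista versions of Windows) and a placeholder URL:

        @echo off
        rem grab the first IPv4 address reported by ipconfig into %IP%
        for /f "tokens=2 delims=:" %%A in ('ipconfig ^| findstr /c:"IPv4 Address"') do (
            set IP=%%A
            goto gotip
        )
        :gotip
        rem strip the leading space left over from the "label: value" split
        set IP=%IP: =%
        rem build the URL and open it (example.com is a placeholder)
        start "" "http://example.com/report?ip=%IP%"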

  • Template standard controls for an entirely new look and feel

    - by T
    This is the Ineta Live player without the OData feed. It is a good example of taking the plain media player provided with the Encoder install and re-templating it to make it your own. It also has a custom scrub control added in. I generally put my templates in a separate resource file. On this project, I discovered that I had to include the template at the document level because I needed the ability to attach some code-behind to fire change-state behaviors. I could not use the Blend XAML behaviors for change state inside the template because the template can't determine the TargetObject. Version 1.01 – 6/10/09 (wow, how did a week slip by)

    Read the article

  • Not able to track traffic on subdomain using Google Analytics

    - by Steven
    I'm trying to track traffic for my subdomain, but it's not happening. This is how it's set up: my partner has a domain called sub1.partner.com. This domain points to partner1.mydomain.com. The idea is that users think they are browsing my partner's website, when they are in fact browsing pages on my server. My tracking code looks like this:

        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-xxxxxxxx-x']);
        _gaq.push(['_setDomainName', '.mysite.com']);
        _gaq.push(['_trackPageview']);
        (function() {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
        })();

    In Google Analytics I've created a new account under my main account and called it partner1.mysite.com. On this account I have created a filter: Filter type: include; Filter field: Host name; Filter pattern: partner1.mysite.no; Case sensitive: No. What more can I try to track traffic on my subdomain? UPDATE Question 1: Is this line correct? _gaq.push(['_setDomainName', '.mysite.com']); Question 2: Is it correct that I have to add \ before any punctuation, like \. in filters?

    Read the article

  • How to avoid oscillation in async event-based systems?

    - by inf3rno
    Imagine a system where there are data sources which need to be kept in sync. A simple example is model-view data binding in MVC. I intend to describe these kinds of systems in terms of data sources and hubs. Data sources publish and subscribe to events, and hubs relay events to data sources. By handling an event, a data source changes its state as described in the event. By publishing an event, the data source puts its current state into the event, so other data sources can use that information to change their state accordingly. The only problem with this system is that events can be reflected back from the hub or from the other data sources, and that can put the system into an infinite oscillation (asynchronously, or an infinite loop synchronously). For example:

        A -- data source
        B -- data source
        H -- hub

        A -> H -> A           -- reflection from the hub
        A -> H -> B -> H -> A -- reflection from another data source

    In the synchronous case it is relatively easy to solve this issue: you can compare the current state with the event, and if they are equal, you don't change the state and don't raise the same event again. In the asynchronous case I could not find a solution yet. The state comparison does not work with async event handling because there is only eventual consistency, and new events can be published in an inconsistent state, causing the same oscillation. For example:

        A(*->x) -> H -> B(y->x)   -- can go in parallel with
        B(*->y) -> H -> A(x->y)
        -- so first A changes to state x while B changes to state y
        -- then B changes to state x while A changes to state y
        -- and so on for eternity...

    Do you think there is an algorithm to solve this problem? If there is a solution, is it possible to extend it to prevent oscillation caused by multiple hubs, multiple different events, etc.? update: I don't think I can make this work without a lot of effort. I think this problem is just the same as syncing multiple databases in a distributed system. So I think what I really need is constraints if I want to prevent this problem in an automatic way. What constraints do you suggest?

    Read the article

  • How to kill all screens that have been up longer than 3 weeks?

    - by Darkmage
    I'm creating a script, executed every night at 03.00, that will kill all screens that have been running longer than 3 weeks. Has anyone done anything similar that can help? If you have a script or a suggestion for a better method, please help by posting :) I was thinking of something like this: first do a dump to a text file, ps -U username -ef | grep SCREEN > dump.txt, then loop through all lines of dump.txt with a regex, putting the PIDs of the processes with an STIME of 3 weeks ago into an array, and then run a kill loop on the array result. (A sketch follows after this entry.)

    Read the article
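
    A minimal sketch under the assumption that a reasonably recent procps ps is available (the etimes column, elapsed time in seconds, needs a newer ps; otherwise the etime field has to be parsed). It avoids parsing STIME entirely:

        #!/bin/bash
        # kill this user's screen/SCREEN processes that have been alive for more than 21 days
        max_age=$((21 * 24 * 3600))
        ps -U "$USER" -o pid=,etimes=,comm= \
            | awk -v max="$max_age" 'toupper($3) == "SCREEN" && $2 > max { print $1 }' \
            | xargs -r kill

    Dropped into a nightly cron entry at 03:00, this would replace the dump-and-regex loop.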

  • "// ..." comments at end of code block after } - good or bad?

    - by gablin
    I've often seen comments like these used:

        function foo() { ... } // foo
        while (...) { ... } // while
        if (...) { ... } // if

    and sometimes even going as far as

        if (condition) { ... } // if (condition)

    I've never understood this practice and thus never applied it. If your code is so long that you need to know what this ending } closes, then perhaps you should consider splitting it up into separate functions. Also, most developer tools are able to jump to the matching bracket. And finally, the last example is, for me, a clear violation of the DRY principle; if you change the condition you would have to remember to change the comment as well (or else it could get messy for the maintainer, or even for you). So why do people use this? Should we use it, or is it bad practice?

    Read the article

  • Convert Currencies Dynamically using PHP, Google and cURL [closed]

    - by LizO
    I want to allow users to dynamically change the currency of the product prices in my webstore, right there on the page. For example, 300 USD will change to 221.61 EUR when the user selects Euros from a dropdown. I found a few sites with PHP code in a calculator/input format (the user inputs a value and the converted amount is output): http://www.chazzuka.com/blog/?p=104 http://www.pixel2life.com/publish/tutorials/1166/currency_conversion_in_php/page-3/ I was hoping someone could help me figure out how to modify the PHP script. Thanks in advance.

    Read the article

  • Bash: verify that process has stopped

    - by pfac
    I'm working on a script meant to start/stop a set of services. For stopping, it has to terminate many processes which take a while and might hang. The script needs to verify that the process has indeed terminated, and send an email if that does not happen within a given period. This is what I have:

        pkill -f "stuff"
        for i in {1..30}; do
            VERIFICATIONS=$i
            if verification_command
            then
                echo "It's gone"
                break
            fi
            sleep 2
        done
        if [ $VERIFICATIONS -ge 30 ]; then
            echo "failed to terminate"
            # send mail
        fi

    Is there a better way to do this? (A sketch follows after this entry.)

    Read the article
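
    One possible alternative, sketched below: poll pgrep against a deadline instead of counting loop iterations, so the timeout is explicit. The "stuff" pattern and the mail recipient are placeholders:

        #!/bin/bash
        pkill -f "stuff"
        deadline=$((SECONDS + 60))              # allow 60 seconds for the process to go away
        while pgrep -f "stuff" > /dev/null; do
            if (( SECONDS >= deadline )); then
                echo "failed to terminate"
                # mail -s "service did not stop" admin@example.com < /dev/null
                exit 1
            fi
            sleep 2
        done
        echo "It's gone"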

  • Are separate business objects needed when persistent data can be stored in a usable format?

    - by Kylotan
    I have a system where data is stored in a persistent store and read by a server application. Some of this data is only ever seen by the server, but some of it is passed through unaltered to clients. So, there is a big temptation to persist data - whether whole rows/documents or individual fields/sub-documents - in the exact form that the client can use (e.g. JSON), as this removes various layers of boilerplate, whether in the form of procedural SQL, an ORM, or any proxy structure which exists just to hold the values before having to re-encode them into a client-suitable form. This form can usually be used on the server too, though business logic may have to live outside of the object. On the other hand, this approach ends up leaking implementation details everywhere. 9 times out of 10 I'm happy just to read a JSON structure out of the DB and send it to the client, but 1 in every 10 times I have to know the details of that implicit structure (and be able to refactor access to it if the stored data ever changes). And this makes me think that maybe I should be pulling this data into separate business objects, so that business logic doesn't have to change when the data schema does. (Though you could argue this just moves the problem rather than solves it.) There is a complicating factor in that our data schema is constantly changing rapidly, to the point where we dropped our previous ORM/RDBMS system in favour of MongoDB and an implicit schema which was much easier to work with. So far I've not decided whether the rapid schema changes make me wish for separate business objects (so that server-side calculations need less refactoring, since all changes are restricted to the persistence layer) or for no separate business objects (because every change to the schema requires the business objects to change to stay in sync, even if the new sub-object or field is never used on the server except to pass verbatim to a client). So my question is whether it is sensible to store objects in the form in which they are usually going to be used, or if it's better to copy them into intermediate business objects to insulate both sides from each other (even when that isn't strictly necessary)? And I'd like to hear from anybody else who has had experience of a similar situation, perhaps choosing to persist XML or JSON instead of having an explicit schema which has to be assembled into a client format each time.

    Read the article

  • How to factorize code in Unreal Kismet (i.e. "Material Function"s for Kismet)

    - by Georges Dupéron
    In the Unreal Development Kit, when using the Material Editor, one can factor out frequently-used groups of nodes by creating a Material Function (Content Browser -> right-click -> New Material Function, IIRC). When defining the behaviour of some actor in Kismet, one can easily have a dozen nodes involved. If I have many actors that share the same behaviour, then I'll copy-paste these nodes and change the variables so they point to the other actors. This leads to inconsistencies (a modification to the behaviour of an actor isn't propagated to the copy-pasted nodes), complexity (you end up with hundreds of nodes), and generally useless effort. My question is: can I create a "Kismet function", just like a material function? Note: I'd rather avoid using UnrealScript. I don't even know where to type UnrealScript, don't know where the documentation is, and more generally don't have enough time to invest in learning UnrealScript. This "Kismet function" feature must be usable by artists (with little programming knowledge). If a (simple) script suffices to add this feature to the Kismet editor, so that one can create several "functions" without using UnrealScript, then fine, but I don't really want to have to write a script each time I want to factor out a few nodes. Thanks for any information!

    Read the article

  • New! EBS CRM Service Request Templating Online

    - by Oracle_EBS
    In an effort to improve the user experience, changes have been made to the Service Request (SR) creation process in My Oracle Support (MOS). This change is now online for several high-use CRM products. We aimed to streamline the process by reducing the number of questions, making subsequent questions conditional on previous responses, reducing lists of problem categories, and recommending key documents/evidence which should be supplied to help the Support engineer progress the issue. The process is now divided into three steps: Problem, which prompts for a summary of the issue and the steps that have to be performed to reproduce it; More Information, where users will see the biggest change, as they select the 'Problem Type', which then presents a series of suggested attachments to upload; and Severity/Contact, which records who to contact, by what means, and the degree of urgency for the issue. The products included are: Incentive Compensation, Trade Management, Site Hub, Incentive Compensation Analytics for Oracle Data Integrator, TeleService, Install Base, Quoting, Sales, Field Service, and Service Contracts.

    Read the article

  • How do I allow full brightness control on Unity?

    - by ChrisL
    When I use the brightness-changing buttons, Unity changes the brightness by two levels each time to create a nice effect. However, I only have about 10 different brightness settings, so this effect has reduced the number of usable brightness settings to 5. I change my brightness regularly to suit the lighting in the room I'm in, but am unable to do this as effectively now that the number of levels has been reduced. Is there any way of turning off this brightness effect and controlling my brightness setting more sensitively? Thank you for your time. EDIT: It may not be 2 levels it changes at a time but 3; I've worked this out because the brightness control panel allows you to set the brightness accurately, and there seem to be a couple of settings that the buttons miss completely when I change the brightness with them.

    Read the article

  • Postgres backup

    - by Abbass
    Hello, I have a Bacula script that does an automatic backup of a Postgres database. The script makes two backups of the database using pg_dump: the schema only and the data only.

        /usr/bin/pg_dump --format=c -s $dbname --file=$DUMPDIR/$dbname.schema.dump
        /usr/bin/pg_dump --format=c -a $dbname --file=$DUMPDIR/$dbname.data.dump

    The problem is that I can't figure out how to restore it with pg_restore. Do I need to create the database and the users first, then restore the schema and finally the data? I did the following:

        pg_restore --format=c -s -C -d template1 xxx.schema.dump
        pg_restore --format=c -a -d xxx xxx.data.dump

    The first restore creates the database with empty tables, but the second gives many errors like this one:

        pg_restore: [archiver (db)] COPY failed: ERROR: insert or update on table "Table1" violates foreign key constraint "fkf6977a478dd41734"
        DETAIL: Key (contentid)=(1474566) is not present in table "Table23".

    Any ideas? (A sketch follows after this entry.)

    Read the article
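
    Two hedged options, sketched with the same placeholder names as above. Option A restores from a single custom-format dump, letting pg_restore order tables by their dependencies; option B keeps the split dumps but disables triggers (and therefore foreign-key checks) during the data-only pass, which requires connecting as a superuser. Roles still have to exist beforehand (e.g. restored from pg_dumpall --globals-only).

        # option A: one dump that contains both schema and data
        pg_dump --format=c "$dbname" --file="$DUMPDIR/$dbname.dump"
        pg_restore --create -d template1 "$DUMPDIR/$dbname.dump"

        # option B: keep the split dumps, but skip FK/trigger checks while loading data
        pg_restore -s -C -d template1 xxx.schema.dump
        pg_restore -a --disable-triggers -d xxx xxx.data.dump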

  • How far should an entity take care of its properties values by itself?

    - by Kharlos Dominguez
    Let's consider the following example of a class, which is an entity that I'm using through Entity Framework:

        InvoiceHeader
            - BilledAmount (property, decimal)
            - PaidAmount (property, decimal)
            - Balance (property, decimal)

    I'm trying to find the best approach to keeping Balance updated, based on the values of the two other properties (BilledAmount and PaidAmount). I'm torn between two practices here: updating the balance every time BilledAmount and PaidAmount are updated (through their setters), or having an UpdateBalance() method that the callers would run on the object when appropriate. I am aware that I could just calculate the Balance in its getter. However, that isn't really possible because this is an entity field that needs to be saved back to the database, where it has an actual column, and where the calculated amount should be persisted. My other worry about the automatically updating approach is that the calculated values might be a little bit different from what was originally saved to the database, due to rounding (an older version of the software used floats, but now decimals). So, loading, let's say, 2000 entities from the database could change their status and make the ORM believe that they have changed, so they would be persisted back to the database the next time the SaveChanges() method is called on the context. It would trigger a mass of updates that I am not really interested in, or could cause problems if the calculation methods changed (the entities fetched would lose their old values, to be replaced by freshly recalculated ones, simply by being loaded). Then, let's take the example even further. Each invoice has some related invoice details, which also have BilledAmount, PaidAmount and Balance (I'm simplifying my actual business case for the sake of the example, so let's assume the customer can pay each item of the invoice separately rather than as a whole). If we consider that the entity should take care of itself, any change to the child details should cause the invoice totals to change as well. In a fully automated approach, a simple implementation would be looping through each detail of the invoice to recalculate the header totals every time one of the properties changes. That would probably be fine for a single record, but if a lot of entities were fetched at once, it could create significant overhead, as it would perform this process every time a new invoice detail record is fetched. Possibly worse, if the details are not already loaded, it could cause the ORM to lazy-load them just to recalculate the balances. So far, I went with the UpdateBalance() method approach, mainly for the reasons I explained above, but I wonder if it was right. I'm noticing I have to keep calling these methods quite often and at different places in my code, and it is a potential source of bugs. It also has a detrimental effect on data binding, because when the properties of the detail or header change, the other properties are left out of date and the method has no way to be called. What is the recommended approach in this case?

    Read the article

  • How to make an iOS plugin for Unity3D

    - by DannoEterno
    I've spent the last 2 days reading articles and books trying to understand how I can make a plugin for iOS in Unity. Basically I just need a demo to understand how it works. So far I've tried the following process (with really poor luck). I started a new project in Unity and wrote a simple script:

        using UnityEngine;
        using System.Collections;
        using System;
        using System.Runtime.InteropServices;

        public class CallPlugin : MonoBehaviour {
            [DllImport ("__Internal")]
            private static extern int test();

            void Start () {
                Debug.Log(test());
            }
        }

    Then I created a project in Xcode with this simple script:

        extern "C" {
            int test() {
                int che = 5;
                return che;
            }
        }

    Then I tried to put the .mm and .h in Assets/Plugins/iOS = nothing; to build the Unity project and then add the .h and .mm to the Xcode project = nothing. In Unity I always get an EntryPointNotFoundException, so Unity sees the file but is unable to reach the method. The problem is... how?! :) Maybe I missed something or did something wrong? Thanks a lot for any help you can give me :)

    Read the article

  • Wireless switch on Dell XT2 - strange behaviour of rfkill

    - by DyP
    I have a Dell Latitude XT2 with an Intel WLAN card (lspci lists it as "Intel Corporation Ultimate N WiFi Link 5300") running Lubuntu 12.04 with recent updates. The laptop has a hardware WLAN switch. I have problems activating the WLAN when booting with the hardware switch set to "off". The situation is a bit confusing, unfortunately: rfkill lists two WLAN devices (though lspci only shows the Intel one). This is the situation when booting with the hardware switch set to "off":

        0: dell-wifi: Wireless LAN
            Soft blocked: yes
            Hard blocked: yes
        1: dell-bluetooth: Bluetooth
            Soft blocked: yes
            Hard blocked: yes
        2: phy0: Wireless LAN
            Soft blocked: yes
            Hard blocked: yes

    From some tests, I conclude WLAN is only activated when both dell-wifi and phy0 are unblocked in soft- and hardware. But I can only unblock dell-wifi after the hardware switch is set to "on". The procedure right from boot with the hardware switch set to "off": soft-unblocking phy0 works as expected and could be done by a start-up script. sudo rfkill unblock 0: nothing happens, the soft block of dell-wifi is not removed. Setting the hardware switch to "on": phy0 gets its hard block removed, still no WLAN. sudo rfkill unblock 0: both the soft and hard block of dell-wifi are removed, WLAN is now active and works. sudo rfkill block 0: only adds the soft block, as expected, and WLAN goes off again. So, in order to activate WLAN, I have to use the hardware switch and afterwards (manually) run a script - that's a bit inconvenient. Does someone know a better solution? Maybe a daemon could help that listens to rfkill events and unblocks dell-wifi after I have set the hardware switch to "on"? (That sounds like another workaround, though; see the sketch after this entry.) When booting with the hardware switch set to "On", nothing is blocked, neither hard nor soft.

    Read the article
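
    A rough sketch of the event-listening daemon idea, run as root. It assumes the rfkill command-line tool's event mode and the device indices from the output above (0 = dell-wifi, 2 = phy0), which may change between boots, so treat it as a starting point rather than a finished solution:

        #!/bin/bash
        # whenever an rfkill event arrives, check whether the hardware switch has been
        # turned on (phy0's hard block gone) and, if so, soft-unblock dell-wifi, which
        # per the observation above also clears its hard block
        rfkill event | while read -r _; do
            if rfkill list 2 | grep -q "Hard blocked: no"; then
                rfkill unblock 0
            fi
        done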

  • How to generate SPMetal for a specific list (OOTB: like tasks or contacts) with custom columns

    - by KunaalKapoor
    SPMetal is used to make use of LINQ on a list in SharePoint 2010. By default, when you generate SPMetal on a site you will get a code-generated file for most of the lists, and probably more. Here is an MSDN link with some info on SPMetal: http://msdn.microsoft.com/en-us/library/ee538255(office.14).aspx But what if you want to generate the code for only one list? Well, it is quite simple once you figure it out. You need to add an XML file that overrides the default settings of SPMetal and specify it in the /parameters option. I will show you how to do this. First create a folder that will contain two files (GenerateSPMetalCode.bat and SPMetal.xml). Below is the content of the files.

    GenerateSPMetalCode.bat:

        "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\BIN\SPMetal" /web:http://YourServer /code:OutPutFileName.cs /language:csharp /parameters:SPMetal.xml
        pause

    SPMetal.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <Web AccessModifier="Internal" xmlns="http://schemas.microsoft.com/SharePoint/2009/spmetal">
          <List Name="ListName">
            <ContentType Name="ContentTypeName" Class="GeneratedClassName" />
          </List>
          <ExcludeOtherLists></ExcludeOtherLists>
        </Web>

    You will have to change some of the text in the files so that it is specific to your SharePoint setup. In the bat file you have to change http://YourServer to the URL of the web where your list is. In the SPMetal.xml file you need to change ListName to the name of your list and ContentTypeName to the name of the content type you want to extract. The GeneratedClassName can be anything, but perhaps you should rename it to something more sensible. Adding the List element shown above ('<List Name="ListName"><ContentType Name="ContentTypeName" Class="GeneratedClassName" /></List>') makes sure that any custom columns added to an OOTB list like Contacts or Tasks are also generated; they are missed out in a regular generation. So now when you run it, the SPMetal command will read the SPMetal.xml file and override its defaults. The ExcludeOtherLists element makes it so that only the code for the lists you specify will be generated. For some reason I got an error if I had this element above the List element. You should now have a generated code file called OutPutFileName.cs. You can put this in your SharePoint project for use with your LINQ queries against that list. I will soon write a LINQ example that uses the generated class. UPDATE: Add the /namespace parameter to add a namespace to the generated code:

        "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\BIN\SPMetal" /web:http://YourServer /namespace:MySPMetalNameSpace /code:OutPutFileName.cs /language:csharp /parameters:SPMetal.xml

    Read the article

  • I have Ubuntu 11.10 and for no apparent reason my sidebar suddenly will not hide, how can I reverse this?

    - by cathy
    I have Ubuntu 11.10 and for no apparent reason my sidebar suddenly will not hide; I did not go into System Settings or anything like that. I googled this, but everything I found had me downloading something to fix it. However, as it changed without me downloading anything, I'm hoping it's possible to change it back without downloading anything. I read somewhere that I should go into "System Settings", "Appearance", "Behaviour" and change it to "auto-hide", but I do not have a "Behaviour" tab in "Appearance". I would normally be able to just deal with it, but it is preventing me from being able to click the "back" button when I am in a browser. Does anyone

    Read the article

  • How to bring application to front every 15 minutes on Mac OS X Lion?

    - by johnnyb10
    I have a free or cheap little app called Desktop Task Timer LE that I've been using to track my time as I work on various projects. I'd like to have it pop up as the foreground app every 15 minutes to prevent me from forgetting to stop/change the timer after I've moved on to a different task. I know I can have the app launch using a script in Automator or AppleScript, but I don't know how to have that script fire off every 15 minutes. I've read about using launchd and iCal, but I'm still not sure how to do it. (Actually, iCal is probably simple, but I'd like to avoid using it for this.) Any ideas? (A launchd sketch follows after this entry.) Also, a further feature would be to have it pop up after 5 (or x) minutes of inactivity on the computer. Not sure if this would work for my needs, but I'd like to test it if possible.

    Read the article
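
    A minimal launchd sketch for the every-15-minutes part, saved as a user LaunchAgent. The label and file name are arbitrary placeholders, and the app name in the tell block must match whatever name AppleScript sees for the timer app; StartInterval is in seconds, so 900 means every 15 minutes:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Label</key>
            <string>local.timer.bringtofront</string>
            <key>ProgramArguments</key>
            <array>
                <string>/usr/bin/osascript</string>
                <string>-e</string>
                <string>tell application "Desktop Task Timer LE" to activate</string>
            </array>
            <key>StartInterval</key>
            <integer>900</integer>
        </dict>
        </plist>

    Save it as ~/Library/LaunchAgents/local.timer.bringtofront.plist and load it with launchctl load ~/Library/LaunchAgents/local.timer.bringtofront.plist. The inactivity-based variant would need a different mechanism, since launchd has no built-in idle trigger.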

  • Secure copy uucp style

    - by Alexander Janssen
    I often have to make a lot of hops to reach a remote host, just because there is no direct routing between my client and that host. When I need to copy files from a remote host two or more hops away, I always have to do this:

        client$ ssh host1
        host1$ ssh host2
        host2$ scp host3:/myfile .
        host2$ exit
        host1$ scp host2:myfile .
        host1$ exit
        client$ scp host1:myfile .

    Back when uucp was still being used, this would be as simple as uucp host1!host2!host3 /myfile . I know that there's uucp over ssh, but unfortunately I don't have the proper privileges on those machines to set it up. Also, I'm not sure I really want to fiddle around with a customer's machines. Does anyone know of a method for doing this without having to set up a lot of tunnels or deploy new software to the remote hosts? Maybe some kind of recursive script which clones itself to all the remote hosts and does the hard work for me? Assume that authentication takes place with public keys and that all hosts do SSH agent forwarding. Edit: I'm not looking for a way to automatically forward my interactive session to the next-hop host. I want a solution to copy files bang-path-style using scp via multiple hops without needing to install uucp on any of those machines. I don't have the (legal) rights or the privileges to make permanent changes to the ssh config. Also, I'm sharing this username and these hosts with a lot of other people. I'm willing to hack up my own script, but I wanted to know if anyone knows of something which already does this: minimally invasive changes to hosts on the bang path, simple invocation from the client. (See the sketch after this entry.) Edit 2: To give you an impression of how it's properly done for interactive sessions, have a look at the GXPC clustershell. This is basically a Python script which spawns itself over to all remote hosts which have connectivity and where your ssh key is installed. The great thing about it is that you can say "I can reach HostC via HostB via HostA." It just works. I want to have this for scp.

    Read the article
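
    A hedged sketch that stays on the client and only needs agent forwarding to be allowed on the intermediate hosts (host1/host2/host3 are the hop names from the example above): stream the file through the chain instead of staging copies on each hop.

        # single file: each hop just relays stdout back towards the client
        ssh -A host1 'ssh -A host2 "ssh host3 cat /myfile"' > myfile

        # several files or a directory: wrap them in tar so permissions survive the trip
        ssh -A host1 'ssh -A host2 "ssh host3 tar -C / -cf - myfile"' | tar -xf -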

  • Is it possible to extend a 504 timeout in nginx on a per location basis

    - by codecowboy
    Is it possible to set timeout directives within a location block to prevent nginx from returning a 504 for a long-running PHP script (PHP-FPM)?

        location /myurlsegment/ {
            client_body_timeout 1000000;
            send_timeout 1000000;
            fastcgi_read_timeout 1000000;
        }

    This has no effect when making a request to example.com/myurlsegment. The timeout occurs after approximately 60 seconds. PHP is configured to allow the script to run until completion (set_time_limit(0)). I don't want to set a global timeout for all scripts. (See the sketch after this entry.)

    Read the article
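
    One common cause, sketched below with placeholder paths: fastcgi_read_timeout only takes effect in the location that actually contains the fastcgi_pass handling the request, so if /myurlsegment/ is rewritten to a front controller served by a separate PHP location, the directive has to live there (and PHP-FPM's own request_terminate_timeout must not cut the request off first).

        location /myurlsegment/ {
            try_files $uri /index.php?$args;          # assumption: a front-controller rewrite
        }

        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass unix:/var/run/php-fpm.sock;  # placeholder socket/upstream
            fastcgi_read_timeout 1000s;               # takes effect here, where fastcgi_pass runs
            send_timeout 1000s;
        }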
