Search Results

Search found 54052 results on 2163 pages for 'run configuration'.


  • Release build vs nightly build

    - by Tuomas Hietanen
    Hi! A typical solution is to have a CI (Continuous Integration) build running on a build server: it analyzes the source code, makes a build (in debug), runs tests, measures test coverage, etc. Another commonly known build type is the "Nightly build", which does the slow stuff: generating code documentation, making a setup package, deploying to a test environment, and running automated (smoke or acceptance) tests against that environment. Now, the question: is it better to have a third, separate "Release build"? Or should the nightly build be done in release mode and used as the release? What are you using in your company? (The release build should also tag the potential product version in source control.)

    Read the article

  • Copy all bridge traffic to a specific interface

    - by Azendale
    I have a bridge/switch set up on a machine that has multiple ports. Occasionally I have a VM running through VirtualBox; I'll have it use a virtual adapter and then add that adapter to the bridge. I have heard that some switches can copy all the traffic they see to a specific port, usually for network monitoring. I would like to be able to run some Windows-based network tools, but I do not want to run Windows on the actual hardware because it would be a lot of work to duplicate my setup there. So I was thinking that if I can copy all traffic to one port, I can send it to a VM running Windows. How can I set this up? I think this might be ebtables territory, but I don't know ebtables well enough to be sure, and from my understanding ebtables always does something with the traffic (drop, accept, etc.) but never copies it.
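    One possible approach, sketched here as an assumption rather than taken from the post: on Linux the usual tool for copying (rather than filtering) bridge traffic is tc with the mirred action, not ebtables. The bridge name br0 and the VM's tap interface vboxnet0 below are placeholders for whatever the real setup uses.

      # Mirror everything the bridge receives onto the VM's interface (names are hypothetical)
      BRIDGE=br0
      MIRROR_PORT=vboxnet0

      # Attach an ingress qdisc to the bridge so incoming frames can be filtered
      sudo tc qdisc add dev "$BRIDGE" handle ffff: ingress

      # Catch-all u32 filter: copy every frame to the mirror port
      sudo tc filter add dev "$BRIDGE" parent ffff: protocol all u32 match u8 0 0 \
          action mirred egress mirror dev "$MIRROR_PORT"

      # Repeat for traffic leaving the bridge, if both directions are wanted
      sudo tc qdisc add dev "$BRIDGE" handle 1: root prio
      sudo tc filter add dev "$BRIDGE" parent 1: protocol all u32 match u8 0 0 \
          action mirred egress mirror dev "$MIRROR_PORT"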

    Read the article

  • Tweaking log4net Settings Programmatically

    - by PSteele
    A few months ago, I had to dynamically add a log4net appender at runtime. Now I find myself in another log4net situation: I need to modify the configuration of my appenders at runtime. My client requires all files generated by our applications to be saved to a specific location. This location is determined at runtime. Therefore, I want my FileAppenders to log their data to this specific location – but I won't know the location until runtime, so I can't add it to the XML configuration file I'm using. No problem. Bing is my new friend and returned a couple of hits. I made a few tweaks to their LINQ queries and created a generic extension method for ILoggerRepository (just a hunch that I might want this functionality somewhere else in the future – sorry, YAGNI fans):

      public static void ModifyAppenders<T>(this ILoggerRepository repository, Action<T> modify)
          where T : log4net.Appender.AppenderSkeleton
      {
          var appenders = from appender in log4net.LogManager.GetRepository().GetAppenders()
                          where appender is T
                          select appender as T;

          foreach (var appender in appenders)
          {
              modify(appender);
              appender.ActivateOptions();
          }
      }

    Now I can easily add the proper directory prefix to all of my FileAppenders at runtime:

      log4net.LogManager.GetRepository().ModifyAppenders<FileAppender>(a =>
      {
          a.File = Path.Combine(settings.ConfigDirectory, Path.GetFileName(a.File));
      });

    Thanks beefycode and Wil Peck. Technorati Tags: .NET, log4net, LINQ

    Read the article

  • Coherence Management with EM Cloud Control 12c –demo for partners

    - by JuergenKress
    For access to the Oracle demo systems please visit OPN and talk to your Partner Expert. We are pleased to announce the availability of the Coherence Management demo, which showcases some of the key capabilities of Management Pack for Oracle Coherence and JVM Diagnostics (licensed under WLS Management Pack EE and Management Pack for Non-Oracle MW). This demo specifically focuses on some of the performance management and configuration management solutions for Oracle Coherence. The demo flow showcases the key enhancements made in the Enterprise Manager 12c release, which include a new customizable performance summary, cache data management and configuration management.

    Demo Highlights: the demo showcases the following capabilities.
    - Centralized monitoring for enterprise-wide Coherence deployments
    - Drill-down diagnostics
    - Customizable performance views
    - Monitoring performance trends
    - Monitoring caches, nodes, services, etc.
    - Performance and log alerts
    - Real-time Java diagnostics and memory leak analysis
    - Cache data management
    - Lifecycle management: provisioning Coherence on a new machine, starting nodes on a machine where Coherence is already running, killing a node process

    Demo Instructions: Go to the DSS website for Oracle Partners. On the Standard Demo Launchpad page, under the "Middleware Management" section, click on the link "EM Cloud Control 12c Coherence Management" (tagged as "NEW"). The specific demo launchpad page contains a link to the detailed demo script with instructions on how to show the demo. Read more on Community Events and post your comment here.

    WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community at http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: Coherence, Coherence demo, DSS, CAF, WebLogic, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

  • Puppet: Getting Started On Windows

    - by Robz / Fervent Coder
    Originally posted on: http://geekswithblogs.net/robz/archive/2014/08/07/puppet-getting-started-on-windows.aspx

    Now that we've talked a little about Puppet, let's see how easy it is to get started.

    Install Puppet: there are two ways to do that.
    - With Chocolatey: open an administrative/elevated command shell and type: choco install puppet
    - Download and install Puppet manually: http://puppetlabs.com/misc/download-options

    Run Puppet: Let's make pasting into a console window work with Control + V (like it should): choco install wincommandpaste. If you have a cmd.exe command shell open (and Chocolatey installed), type: RefreshEnv. That command refreshes your environment variables, a la Chocolatey v0.9.8.24+. If you are running PowerShell, there isn't yet a refreshenv for you (one is coming, though!). If you have to restart your CLI (command line interface) session, or you installed Puppet manually, open an administrative/elevated command shell and type:

      puppet resource user

    Output should look similar to a few of these:

      user { 'Administrator':
        ensure  => 'present',
        comment => 'Built-in account for administering the computer/domain',
        groups  => ['Administrators'],
        uid     => 'S-1-5-21-some-numbers-yo-500',
      }

    Let's create a user:

      puppet apply -e "user {'bobbytables_123': ensure => present, groups => ['Users'], }"

    Relevant output should look like:

      Notice: /Stage[main]/Main/User[bobbytables_123]/ensure: created

    Run the 'puppet resource user' command again. Note the user we created is there! Let's clean up after ourselves and remove that user we just created:

      puppet apply -e "user {'bobbytables_123': ensure => absent, }"

    Relevant output should look like:

      Notice: /Stage[main]/Main/User[bobbytables_123]/ensure: removed

    Run the 'puppet resource user' command one last time. Note we just removed a user!

    Conclusion: You just did some configuration management / system administration. Welcome to the new world of awesome! Puppet is super easy to get started with. This is a taste so you can start seeing the power of automation and where you can go with it. We haven't talked about resources, manifests (scripts), best practices and all of that yet. Next we are going to get into more extensive things with Puppet. Next time we'll walk through getting a Vagrant environment up and running, so we can do some crazier stuff and, when we are done, just clean it up quickly.

    Read the article

  • launching executable from /usr/local/bin needs access to local file

    - by kedmond
    I have an executable, called "octane", that I want to be able to launch globally. The executable requires access to a local binary, "octane.dat", in order to run. I placed the directory containing the executable and the binary in /opt/ as root and created a symbolic link of the executable in /usr/local/bin/. Now, if I type "octane" anywhere, it launches but throws up an error saying it won't run without the binary, "octane.dat". Octane will only launch if my current working directory is the same as the executable and binary, in /usr/local/bin. Any suggestions on how to fix this? Do I have to make that directory global using .bashrc? Thanks.
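    One common fix, sketched here under the assumption that the program lives in /opt/octane: replace the symlink with a small wrapper script that changes into the program's directory before launching it, so octane.dat is always found regardless of where the command is typed.

      #!/bin/sh
      # /usr/local/bin/octane -- hypothetical wrapper; adjust the path to the real install directory
      cd /opt/octane || exit 1
      exec ./octane "$@"

    After removing the old symlink, save the script as /usr/local/bin/octane and make it executable with chmod +x /usr/local/bin/octane.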

    Read the article

  • Howto configure openSuSE firewall to route local traffic to local ports

    - by Eduard Wirch
    I have openSUSE 11.3 installed and I'm using the openSUSE firewall configuration mechanism (/etc/sysconfig/SuSEfirewall2). I have an HTTP server application running on port 8080, and I want the service to be accessible on port 80, so I created a redirect rule using: FW_REDIRECT="0/0,0/0,tcp,80,8080" This works fine for every request coming from outside, but it doesn't work for local requests (for example: wget http://myserver/). Is there a way to tell the firewall to redirect local requests addressed to port 80 to port 8080, using the SUSE firewall configuration file?
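    A likely explanation, offered here as an assumption rather than something from the question: FW_REDIRECT is applied in the nat PREROUTING chain, which locally generated traffic never traverses, so a redirect for local requests has to live in the nat OUTPUT chain. SuSEfirewall2 has a custom-rules hook for exactly this kind of rule; a sketch (verify the hook file path and the FW_CUSTOMRULES setting on the actual system):

      # /etc/sysconfig/scripts/SuSEfirewall2-custom
      # (enable it by pointing FW_CUSTOMRULES in /etc/sysconfig/SuSEfirewall2 at this file)
      fw_custom_after_finished() {
          # Redirect locally generated requests for port 80 to the app listening on 8080
          iptables -t nat -A OUTPUT -p tcp -m addrtype --dst-type LOCAL --dport 80 \
              -j REDIRECT --to-ports 8080
          true
      }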

    Read the article

  • Reverse proxy 502 bad gateway

    - by Brian Graham
    I have set up a subdomain to proxy my Plesk panel, but when saving pages I get a 502 Bad Gateway error instead of a completion message. I am running CentOS 6. Here is my vhost.conf configuration for http://plesk.domain.tld/:

      RewriteEngine On
      RewriteCond %{SERVER_PORT} ^80$
      RewriteRule $ https://plesk.domain.tld/ [R,L]

    Here is my vhost_ssl.conf configuration for https://plesk.domain.tld/:

      SSLProxyEngine On
      <Location />
          ProxyPass https://localhost:8443/
          ProxyPassReverse https://localhost:8443/
      </Location>

    I have more than enough RAM, CPU and HDD (I have checked), and there are no spikes. Also, the posted information does save; it just errors instead of showing the "This information has been saved." green/red block. Here is the relevant error from /var/log/nginx/error.log (IP/host filtered):

      2014/05/29 02:42:41 [error] 8046#0: *402 upstream prematurely closed connection while reading response header from upstream, client: 173.238.XX.XX, server: plesk.domain.tld, request: "POST /smb/web/edit HTTP/1.1", upstream: "https://198.100.XX.XX:7081/smb/web/edit", host: "plesk.domain.tld", referrer: "https://plesk.domain.tld/smb/web/edit"
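    One thing worth trying, offered as a sketch rather than a confirmed fix for this particular 502: give the ProxyPass directive an explicit timeout and disable connection reuse, since a slow POST to the Plesk backend on 8443 can outlive the default proxy timeout and produce exactly this "upstream prematurely closed connection" error. The timeout value below is a guess.

      SSLProxyEngine On
      <Location />
          # timeout, keepalive and retry are standard ProxyPass key=value parameters
          ProxyPass https://localhost:8443/ timeout=600 keepalive=Off retry=0
          ProxyPassReverse https://localhost:8443/
      </Location>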

    Read the article

  • Issue 57 - DotNetNuke Gallery Module and OWS Skin Objects

    June 2010. Welcome to Issue 57 of DNN Creative Magazine. In this issue we show you how to use the DotNetNuke core Gallery module. The Gallery module allows you to upload files and present them within albums. You can upload images as well as media files such as music and video files. The Gallery module has many features, such as multiple albums, bulk upload, categorization, slideshow, display templates, voting, downloads, watermarking and private galleries. This is a useful module for displaying images and media within your DotNetNuke portal, with options for customizing the display to suit your exact requirements. We walk you through, step by step, how to install, use and fully configure the DotNetNuke Gallery module. Following this we continue the Open Web Studio tutorials; this month we demonstrate how to create a Skin Object from an OWS configuration. We show you how to create a menu and a feedback form using OWS and how to display those OWS applications as Skin Objects within a DotNetNuke skin. To finish, we continue the series of articles on DotNetMushroom Rapid Application Developer (RAD), where we demonstrate some of the new features available in the latest version of DNM RAD, including: creating a new data source, creating a linked table, creating a direct query, and the new colour-coding editor. This issue comes complete with 9 videos. Core Modules: DotNetNuke Gallery Module (7 videos, 57 mins). Module Development Series: How to Create a Skin Object from an OWS Configuration (2 videos, 18 mins). New Features in DNM 01.20.00. View issue 57 to download all of the videos in one zip file. DNN Creative Magazine for DotNetNuke Web Designers: covering DotNetNuke module video reviews, video tutorials, mp3 interviews, resources and web design tips for working with DotNetNuke. In 57 issues we have created 587 videos!

    Read the article

  • A human-friendlier Samba name mangling

    - by Alex
    Most of our computers run Ubuntu, but two of them dual-boot into Windows, and when we have guests over they typically also bring Windows machines. Thus, in addition to using NFS, our file server (Ubuntu Server) also runs Samba. Since we use Ubuntu mostly, we like to take advantage of its advantages over Windows, such as being able to use the characters \:*?"<>| in a file name. The problem, of course, is that Windows doesn't accept those characters in file names, and so Samba has to translate the file name into something more acceptable. The way it does this, however, I find obnoxious. The file name Episode 182 - Exorcist 2: The Heretic.mp4, for instance, turns into E4Q82R~Y.MP4. This is a terrible "correction". Is there a way to make Samba's mangling a little friendlier to humans? Is it possible to "correct" it to something like Episode 182 - Exorcist 2_ The Heretic.mp4 instead, where the illegal characters are simply substituted?
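    One option, sketched under the assumption that per-character substitution is acceptable: Samba's catia VFS module maps each illegal Windows character to a look-alike character instead of mangling the whole name, which keeps file names readable. The share name below is hypothetical and the mappings follow the module's commonly documented example; adjust both to taste.

      # smb.conf (per share)
      [media]
          path = /srv/media
          vfs objects = catia
          # map " * / : < > ? \ | to printable look-alike characters
          catia:mappings = 0x22:0xa8,0x2a:0xa4,0x2f:0xf8,0x3a:0xf7,0x3c:0xab,0x3e:0xbb,0x3f:0xbf,0x5c:0xff,0x7c:0xa6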

    Read the article

  • Upgrading Beta to full version work without bugs? [closed]

    - by Nicky Bailuc
    Possible Duplicate: I installed an alpha or beta, am I up to date with the final release if I keep upgrading? When the beta version of 13.04 comes out, I would like to install it and move all my programs, files, and data onto it. On the 18th, when the final version of 13.04 comes out, will I be able to upgrade the beta to the final release without any issues and run it without bugs? I'm asking this because when I upgraded 12.04 to 12.10 it had a lot of glitches. Will 13.04 run the same after upgrading as if I had installed it directly?

    Read the article

  • Removing RAID on a DELL power-edge server

    - by Simon Callan
    We have a Dell PowerEdge T110 server with either a PERC S100 or S300 RAID controller (I suspect S100) and 2 x 500 GB SATA hard discs. These discs are running in a RAID-1 configuration (either 1 or 2 virtual discs), and on top of this we have the C: and D: drives. As we are running low on disc space, we are looking at dropping the RAID to get some space back. Is it possible to split the discs back into two separate hard discs that the OS can see, without losing all the data on the drives? Even better, would it be possible to split off the D: drive, leaving the OS C: drive in RAID-1 configuration?

    Read the article

  • dell vostro 1000 broadcom wireless connection

    - by lorrenuy
    I have a problem with the Broadcom Wi-Fi hardware. I press the hotkey Fn+F2 to activate the hardware and it does not work. I looked at the drivers and they appear to be installed. How can I solve this problem? Ubuntu is all new to me, so if possible give me a clear explanation. For now I connect with the LAN cable. I use Ubuntu 11.10.

      lawrence@lawrence-Vostro-1000:~$ sudo lshw -class network
      [sudo] password for lawrence:
      PCI (sysfs)
        *-network
             description: Network controller
             product: BCM4311 802.11b/g WLAN
             vendor: Broadcom Corporation
             physical id: 0
             bus info: pci@0000:05:00.0
             version: 01
             width: 32 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list
             configuration: driver=b43-pci-bridge latency=0
             resources: irq:18 memory:c0200000-c0203fff
        *-network
             description: Ethernet interface
             product: BCM4401-B0 100Base-TX
             vendor: Broadcom Corporation
             physical id: 0
             bus info: pci@0000:08:00.0
             logical name: eth1
             version: 02
             serial: 00:1c:23:a2:b9:a9
             size: 100Mbit/s
             capacity: 100Mbit/s
             width: 32 bits
             clock: 33MHz
             capabilities: pm bus_master cap_list ethernet physical mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=b44 driverversion=2.0 duplex=full ip=192.168.1.18 latency=64 link=yes multicast=yes port=twisted pair speed=100Mbit/s
             resources: irq:21 memory:c0300000-c0301fff

      lawrence@lawrence-Vostro-1000:~$ rfkill list all
      0: dell-wifi: Wireless LAN
              Soft blocked: yes
              Hard blocked: yes
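    A hedged sketch of the usual first steps for a BCM4311 on Ubuntu 11.10; the package name is the standard Ubuntu one, but verify it before installing. The b43 driver needs proprietary firmware, and rfkill shows the radio both soft- and hard-blocked, so the firmware has to be installed and the block cleared (the hard block usually means the Fn+F2 switch or a BIOS setting is still off).

      # Install the firmware for the b43 driver (needs the wired connection)
      sudo apt-get update
      sudo apt-get install firmware-b43-installer

      # Clear the software block; the hard block has to be cleared with Fn+F2 or in the BIOS
      sudo rfkill unblock all

      # Reload the driver and check that the radio is no longer blocked
      sudo modprobe -r b43 && sudo modprobe b43
      rfkill list all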

    Read the article

  • Where is the "pysdm" package?

    - by John Boy
    I am new to Ubuntu. I have some old hardware lying around, so I decided to build a backup/storage device. I am trying to follow this Lifehacker article. It asks me to open a terminal and run sudo apt-get install pysdm. However, I keep getting "Unable to locate package pysdm". Does anyone know where my pysdm is, or where I can get it? I have run Ubuntu from a USB key and have installed it on a hard drive, and get the same message both ways.
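    A guess about the cause, not something stated in the article: pysdm historically lived in Ubuntu's universe repository, so if universe isn't enabled the package can't be found (on newer releases it may also have been dropped entirely, in which case this won't help). A sketch of enabling universe and retrying:

      # Enable the universe component (on older releases, tick it in Software Sources instead)
      sudo add-apt-repository universe
      sudo apt-get update
      sudo apt-get install pysdm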

    Read the article

  • Double click executable file and nothing happens

    - by Ralf Tiede
    I'm trying to install a game for Linux called Myth 2. Autorun doesn't run when I insert the CD. When I double-click, or right-click and select "Open", on the setup file, a box appears saying that it's an executable file and asking what I want to do. I click on "Run", but nothing happens after that. I checked the permissions, and running the file as an executable is allowed. How do I install this game? Please break the instructions down as much as possible; I'm not used to using commands and the Terminal. ;)
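    A sketch of running the installer from a terminal instead, which usually shows why double-clicking silently fails; the mount point and installer name below are assumptions, so list the CD's contents first to find the real ones.

      # See where the CD is mounted and what the setup file is called
      ls /media/*

      # Change into the CD and run the installer; any error message will print in the terminal
      cd /media/MYTH2        # hypothetical mount point
      sh ./setup.sh          # hypothetical installer name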

    Read the article

  • Clonezilla-SE with another DHCP Server in LAN

    - by aleroot
    I want to install a Clonezilla server (192.168.1.100) in a network that already has a DHCP server (dd-wrt with dnsmasq, 192.168.1.1). I've installed Clonezilla SE on Ubuntu Server 10.10; once Clonezilla Server was installed and configured, I removed its DHCP server and set the PXE server address in the dnsmasq configuration on the DHCP server: dhcp-boot=pxelinux.0,,192.168.1.100 When I try to start a computer in the network from PXE, Clonezilla starts, but it gives me an error that the machine's IP address was not handed out by the Clonezilla server and it can't continue. Has anyone already configured Clonezilla SE in a similar environment? Is there some configuration on Clonezilla's DRBL server that I need to do?
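    A hedged sketch of the dnsmasq side, offered as a guess rather than a confirmed fix: besides the boot file, PXE clients can be pointed at the TFTP/DRBL server explicitly with the standard DHCP options 66 and 67.

      # dnsmasq options on the dd-wrt router (192.168.1.1), pointing PXE clients at 192.168.1.100
      dhcp-boot=pxelinux.0,,192.168.1.100
      dhcp-option=66,"192.168.1.100"   # TFTP server address
      dhcp-option=67,"pxelinux.0"      # boot file name

    Clonezilla SE/DRBL also expects to know the clients it manages, so it may additionally be necessary to re-run drblpush on the Clonezilla server and register the clients' MAC/IP pairs there.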

    Read the article

  • No 'Hardware' tab in audio and no profiles

    - by Gene
    If I run the 12.x Ubuntu (latest, May 2012) from the CD, I get full audio settings and sound playing from the speakers; profiles let me change between analog and digital in/out. Once I run the install from the same CD onto the laptop's hard drive and it boots the first time, when I open the audio settings there is no 'Hardware' tab and no way to change profiles. The worst part is that the audio device is set to S/PDIF, so nothing comes out of the speakers. Very odd how booting off the CD gives me analog audio, while installing to the hard drive and booting seems to limit the profile to something useless. The laptop is a 5-year-old Dell D820 with Nvidia 128 MB video, a 1920x1200 screen and a T7200 CPU. I suspect that if I could get the Hardware tab back in the audio settings, I could just select the proper analog profile, just as when running from the boot CD. I searched the web and found no similar problems... any help appreciated!
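    Even without the Hardware tab, PulseAudio card profiles can usually be switched from the command line; a sketch (the card index and profile name below are illustrative, so list the cards first to see the real ones):

      # List sound cards and the profiles each one offers
      pacmd list-cards

      # Switch the first card to an analog stereo output profile (names vary per machine)
      pacmd set-card-profile 0 output:analog-stereo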

    Read the article

  • Source Control and SQL Development &ndash; Part 3

    - by Ajarn Mark Caldwell
    In parts one and two of this series, I have been specifically focusing on the latest version of SQL Source Control by Red Gate Software. But I have been doing source-controlled SQL development for years, long before this product was available, and well before Microsoft came out with Database Projects for Visual Studio. "So, how does that work?" you may wonder. Well, let me share some of the details of how we do it where I work…

    The key to this approach is that everything is done via Transact-SQL script files; either natively written T-SQL, or generated. My preference is to write all my code by hand, which forces you to become better at your SQL syntax. But if you really prefer to use the Management Studio GUI to make database changes, you can still do that, and then use the Generate Scripts feature of the GUI to produce T-SQL scripts afterwards and store those in your source control system. You can generate scripts for things like stored procedures and views by right-clicking on the database in the Object Explorer and choosing Tasks, Generate Scripts (see figure 1). You can also do that for the CREATE scripts for tables, but that does not work when you have a table that is already in production and you need to make just a simple change, such as adding a new column or index. In this case, you can use the GUI to make the table changes, and then instead of clicking the Save button, click the Generate Change Script button. Then, once you have saved the change script, go ahead and execute it on your development database to actually make the change. I believe that it is important to actually execute the script rather than just click the Save button, because this is your first test that your change script is working and you didn't somehow lose a portion of the change.

    As you can imagine, all this generating of scripts can get tedious and tempting to skip entirely, so again, I would encourage you to just get in the habit of writing your own Transact-SQL code, and then it is just a matter of remembering to save your work, just like you are in the habit of saving changes to a Word or Excel document before you exit the program.

    So, now that you have all of these script files, what do you do with them? Well, we organize ours into folders labeled ChangeScripts, Functions, Views, and StoredProcedures, and those folders are loaded into our source control system. ChangeScripts contains all of the table and index changes, and anything else that is basically a one-time-only execution. Of course you want to write your scripts with qualifying logic so that if a script were accidentally run more than once in a database, it would not crash nor corrupt anything; but these scripts are really intended to be run only once in a database.

    Once you have your initial set of scripts loaded into source control, making a change such as altering a stored procedure becomes a simple matter of checking out your CREATE PROCEDURE* script, editing it in SSMS, saving the change, executing the script in order to effect the change in your database, and then checking the script back in to source control. Of course, this is where the lack of source control integration within SSMS becomes an irritation, because it means that in addition to SSMS, I also have my source control client application running to do the check-out and check-in. And when you have 800+ procedures like we do, it can be quite tedious to locate the procedure I want to change in source control, check it out, then locate the script file in my working folder, open it in SSMS, make the change, save it, and then go back to source control to check it in. Granted, it is not nearly as burdensome as, say, losing your source code and having to rebuild it from memory, or losing the audit trail that good source control systems provide. It is worth the effort, and this is how I have been doing development for the last several years.

    Remember that everything the SQL Server Management Studio does in modifying your database can also be done in plain Transact-SQL code, and this is what you are storing. And now I have shown you how you can do it all without spending any extra money. You already have source control, or can get free, open-source source control systems (almost seems like an oxymoron, doesn't it), and of course Management Studio is free with your SQL Server database engine software. So, whether you spend the money on tools to make it easier or not, you now have no excuse for not using source control with your SQL development.

    * In our current model, the scripts for stored procedures and similar database objects are written with an IF EXISTS…DROP… at the top, followed by the CREATE PROCEDURE… section, and that followed by a section that assigns permissions. This allows me to run the same script regardless of whether the procedure previously existed in the database. If the script were only an ALTER PROCEDURE, it would fail the first time that procedure was deployed to a database, unless you wrote other code to stub it if it did not exist. There are a few different ways you could organize your scripts for deployment, each with its own trade-offs, but I think it is absolutely critical that, whichever way you organize things, you ensure that the same script is run throughout the deployment cycle, and do not allow customizations to creep in between TEST and PROD. If you do, then you have broken the integrity of your deployment process, because what you deployed to PROD was not exactly the same as what was tested in TEST, so you have effectively released untested code into PROD.
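    A minimal sketch of the re-runnable script pattern the footnote describes; the procedure, table and role names are placeholders, not from the article.

      IF EXISTS (SELECT * FROM sys.objects
                 WHERE object_id = OBJECT_ID(N'dbo.GetCustomer') AND type = N'P')
          DROP PROCEDURE dbo.GetCustomer;
      GO

      CREATE PROCEDURE dbo.GetCustomer
          @CustomerID int
      AS
      BEGIN
          SELECT CustomerID, Name
          FROM dbo.Customer
          WHERE CustomerID = @CustomerID;
      END
      GO

      -- DROP removes any previously granted permissions, so re-grant them here
      GRANT EXECUTE ON dbo.GetCustomer TO AppRole;
      GO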

    Read the article

  • Difference between ~/folder and /home/username/folder when creating a path in /etc/environment

    - by r0xx4nne
    I had an executable script on my Ubuntu machine located in the ~/project/ directory, and I tried to add that path to /etc/environment. So I edited the path to this: PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:~/project/" Then I logged out and back in, opened the terminal as su and ran the command to execute my script in that folder, but the result was "command not found". Then I changed the path in /etc/environment to PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/home/r0xx4nne/project/" and voila, it works. Now I can run the executable script inside ~/project/ without fail under the su command. My question is: what's the difference between ~/project and /home/r0xx4nne/project when it comes to creating a path in /etc/environment? Why did it happen like this? I am a newbie and I just want to know more. Thanks for any reply.
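    The likely explanation, stated here as background rather than quoted from the question: /etc/environment is parsed by pam_env, not by a shell, so ~ is never expanded into a home directory; it goes into PATH as the literal characters "~/project/", which matches no real directory. The absolute form works because no expansion is needed. A sketch of the working line:

      # /etc/environment is parsed by pam_env; no shell expansion of ~ or $HOME happens here
      PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/home/r0xx4nne/project"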

    Read the article

  • Why doesn't Wolfram Workbench work on 64-bit Ubuntu?

    - by Ian Hincks
    I have downloaded the shell script (Workbench_2.0.0_LINUX.sh), I have run it as root with it giving no complaints, relevant looking files have appeared in /usr/local/Wolfram/WolframWorkbench/2.0/ and it has created the executable "WolframWorkbench" in /usr/local/bin. However, when I run WolframWorkbench from terminal it spits out /usr/local/bin/WolframWorkbench: 46: exec: /usr/local/Wolfram/WolframWorkbench/2.0/WolframWorkbench: not found That file does indeed exist, and is executable. I have also tried running it directly, and I have also tried running the /usr/local/Wolfram/WolframWorkbench/2.0/Executables/WolframWorkbench too. Is there something I'm missing? (I am running Ubuntu 12.04 64bit with openjdk7)
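    On 64-bit Ubuntu, an "exec: ... not found" error for a file that plainly exists usually means the binary is 32-bit and the 32-bit loader/libraries are missing. A sketch of checking and fixing that on 12.04; this is a guess at the cause, not something stated in the question, and the package name is the usual one for that release, so verify it locally.

      # Check whether the launcher is a 32-bit binary
      file /usr/local/Wolfram/WolframWorkbench/2.0/WolframWorkbench

      # If it reports "ELF 32-bit", install the 32-bit compatibility libraries (Ubuntu 12.04)
      sudo apt-get install ia32-libs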

    Read the article

  • Fullscreen windowed mode in id games

    - by Oli
    I run a TwinView, dual-monitor system. I like to play games fullscreen on one of the monitors, not spanning both. With Wine, this works by setting it to desktop mode with the resolution of one screen. For OpenTTD, I used Compiz's Window Rules plugin. But I have a few native games that this doesn't work for. Today's experiment involved Prey (Doom 3 engine), but I've had similar issues with other id engines. So, in short: has anybody found a way of having Prey/OpenArena/Doom 3/etc. run in windowed mode but with fullscreen decorations (that is to say, no borders and above the panel)?
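    A sketch for the Doom 3 family specifically; these cvars are standard in that engine, though whether Prey's Linux port honours all of them is an assumption, and the resolution below is only an example. The idea is to start the game windowed at one monitor's size and then strip the border with the same Compiz Window Rules trick used for OpenTTD.

      # Start windowed at the size of one monitor instead of fullscreen
      ./prey +set r_fullscreen 0 +set r_mode -1 +set r_customWidth 1680 +set r_customHeight 1050

      # Then use Compiz's Window Rules plugin to remove decorations and keep the window
      # above the panel, matching on the game's window class (e.g. class=prey)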

    Read the article

  • Why is a small fixed vocabulary seen as an advantage to RESTful services?

    - by Matt Esch
    So, a RESTful service has a fixed set of verbs in its vocabulary. A RESTful web service takes these from the HTTP methods. There are some supposed advantages to defining a fixed vocabulary, but I don't really grasp the point. Maybe someone can explain it. Why is a fixed vocabulary, as outlined by REST, better than dynamically defining a vocabulary for each state? For example, object-oriented programming is a popular paradigm. RPC is described as defining fixed interfaces, but I don't know why people assume that RPC is limited by these constraints. We could dynamically specify the interface just as a RESTful service dynamically describes its content structure. REST is supposed to be advantageous in that it can grow without extending the vocabulary. RESTful services grow dynamically by adding more resources. What's so wrong about extending a service by dynamically specifying a per-object vocabulary? Why don't we just use the methods that are defined on our objects as the vocabulary and have our services describe to the client what these methods are and whether or not they have side effects? Essentially I get the feeling that the description of a server-side resource structure is equivalent to the definition of a vocabulary, but we are then forced to use the limited vocabulary with which to interact with these resources. Does a fixed vocabulary really decouple the concerns of the client from the concerns of the server? I surely have to be concerned with some configuration of the server; this is normally resource location in RESTful services. To complain about the use of a dynamic vocabulary seems unfair, because we have to dynamically reason about how to understand this configuration in some way anyway. A RESTful service describes the transitions you are able to make by identifying object structure through hypermedia. I just don't understand what makes a fixed vocabulary any better than a self-describing dynamic vocabulary, which could easily work very well in an RPC-like service. Is this just poor reasoning to justify the limited vocabulary of the HTTP protocol?

    Read the article

  • Build Controller status Unavailable issue in TFS2010

    - by jehan
    I ran into this problem a few days back: I was not able to run builds because the Build Controller was showing its status as Unavailable. It was showing the exception below:

      There was no endpoint listening at http://fullmachinename:9191/Build/v3.0/Services/Controller/2 that could accept the message. This is often caused by an incorrect address or SOAP action. See InnerException, if present, for more details.

    After trying a few things, I looked at the Build Service properties and made the following modifications:

    1) Changed the Local Build Service Endpoint (incoming) from http://machinename.domain.com:9191 to http://machinename:9191
    2) Changed the Connect to Team Project Collection (outgoing) setting from localhost to the machine name, i.e. from http://localhost:8080/tfs/defaultCollection to http://machinename:8080/tfs/DefaultCollection

    After that I started the Build Service again, which fixed the issue: the Build Controller showed Available status and the builds ran.

    Read the article
