Search Results

Search found 18728 results on 750 pages for 'setup deployment'.

Page 187/750

  • Lock-ups, crashing, transferring files using TrueCrypt with iSCSI

    - by Anthony
    I have looked into this error and it seems it hasn't been discussed yet, or at least I can't find any information relating to it. I'm having issues transferring files, usually larger files over a couple of hundred MB. Here is the setup:

    - QNAP 410 as iSCSI target with multiple LUNs (CRC is turned on: Data Digest and Header Digest)
    - Server 2003 with iSCSI Initiator version 2.08, build 3825 (I'm copying files from another machine to shares on Server 2003, i.e. into the TrueCrypt volume and thus onto the NAS)
    - I have mounted the LUN and formatted it with TrueCrypt using NTFS (full format, not a quick one)

    What happens is that some files, mainly RAR/compressed files, appear to copy but fail. I've tested this in a number of ways and can repeat the process every time. So I thought to check transfer over iSCSI without TrueCrypt in between, with a plain NTFS format: no problem at all. So it would seem TrueCrypt is at least part of the problem here. I haven't tried copying directly from the server yet; I will try that. I also haven't tried it without CRC, but I fail to see how that would affect this. I will update with my findings later. In the meantime, does anyone have any ideas as to what could be wrong? Thanks for your time.

    Update: I copied a set of files, the ones I was having issues with, to the server, then from there I copied them into two places within the TrueCrypt volume (mounted on the NAS):

    - A separate directory created in the root of the volume
    - The same initial directory I was using in the first instance

    Both worked fine. So it now seems clear that this is a link between TrueCrypt, iSCSI and Windows shares. I say this because I originally set up the whole system using TrueCrypt volume files, not iSCSI. I changed it as it didn't suit my requirements (a day wasted as well). While I had that setup, though, I copied my entire file set to the volume files and all files copied without error: over the network, from a PC, to the server where TrueCrypt had the volume files mounted from the NAS. I didn't bother turning off CRC on the iSCSI system as I highly doubt that is the cause in light of this finding. So, any ideas?


  • 1 TB HDD and no space to create a partition for Linux!

    - by rangalo
    I have a brand new Acer Aspire 5811 with a Core i5 processor and all that. Windows 7 Home Premium is pre-installed on it. I want to install Arch and set up a dual-boot system. The problem is, Windows shows 4 partitions:

    - 14 MB UNKNOWN recovery partition
    - 100 MB NTFS System Reserved partition for Windows 7
    - 448 GB NTFS Windows 7 system partition
    - 468 GB NTFS data partition for Windows 7

    But the GParted live CD and also the Arch setup show 5 partitions:

    - 938 KB UNKNOWN system reserved partition
    - 14 MB UNKNOWN recovery partition
    - 100 MB NTFS System Reserved partition for Windows 7
    - 448 GB NTFS Windows 7 system partition
    - 468 GB unusable space

    Because of this, I cannot create another primary partition. Can anybody guide me on how to create a partition for installing Arch? Note: I need to keep Windows 7 working. Regards.


  • How do I properly set up and secure a production LAMP Server?

    - by Niklas
    It's very hard to find comprehensive information on this subject. Either I found short tutorials on how to perform the installation, as simple as "apt-get install apache2", or outdated tutorials. So I was hoping I could get some professional information from my fellow members of the Ubuntu community :D I have performed a normal Ubuntu Server 11.04 installation with LAMP, SAMBA and SSH installed through the system installer. But I'm having some trouble setting up virtual hosts and making the system secure enough to expose the server to the web. I've somewhat followed this tutorial this far. I have 3 sites in /etc/apache2/sites-available which all look like this, except for different site names:

        <VirtualHost example.com>
            ServerAdmin webmaster@localhost
            ServerAlias www.edunder.se
            DocumentRoot /var/www/sites/example
            CustomLog /var/log/apache2/www.example.com-access.log combined
        </VirtualHost>

    And I have enabled them with the command a2ensite, so I have symbolic links in /etc/apache2/sites-enabled. My /etc/hosts file has these lines:

        127.0.0.1 localhost
        127.0.1.1 Ubuntu.lan Ubuntu
        127.0.0.1 localhost.localdomain localhost example.com www.example.com
        127.0.0.1 localhost.localdomain localhost example2.com www.example2.com
        127.0.0.1 localhost.localdomain localhost example3.com www.example3.com

    And I can only access one of them from the browser (I have lynx installed on the server for testing purposes), so I guess I haven't set them up properly :) How should I proceed to get a secure and proper setup? I also use MySQL, and I think that this tutorial will be enough to set up SSH securely. Please help me understand Apache configuration better, since I'm new to setting up my own server (I've only run XAMPP earlier), and please advise on how I should set up a firewall as well :D
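
    A side note on the configuration quoted above: matching virtual hosts by name usually requires a wildcard address in the VirtualHost tag plus an explicit ServerName; without those, Apache tends to serve the first site for every request, which matches the symptom described. A minimal sketch of the usual Apache 2.2 name-based pattern, reusing the asker's example paths (on Ubuntu the NameVirtualHost line normally already lives in /etc/apache2/ports.conf):

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName example.com
            ServerAlias www.example.com
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/sites/example
            CustomLog /var/log/apache2/www.example.com-access.log combined
        </VirtualHost>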


  • Tuning Default WorkManager - Advantages and Disadvantages

    - by Murali Veligeti
    Before discussing tuning the Default WorkManager, let's have a brief introduction to what the Default WorkManager is.

    Before the WebLogic Server 9.0 release, we had the concept of execute queues. In WebLogic Server (before WLS 9.0), processing was performed in multiple execute queues. Different classes of work were executed in different queues, based on priority and ordering requirements, and to avoid deadlocks. In addition to the default execute queue, weblogic.kernel.default, there were pre-configured queues dedicated to internal administrative traffic, such as weblogic.admin.HTTP and weblogic.admin.RMI. Users could control thread usage by altering the number of threads in the default queue, or configure custom execute queues to ensure that particular applications had access to a fixed number of execute threads, regardless of overall system load.

    From the WLS 9.0 release onwards, WebLogic Server uses a single thread pool (called the Default WorkManager), in which all types of work are executed. WebLogic Server prioritizes work based on rules you define and on run-time metrics, including the actual time it takes to execute a request and the rate at which requests are entering and leaving the pool. The common thread pool changes its size automatically to maximize throughput. The queue monitors throughput over time and, based on history, determines whether to adjust the thread count. For example, if historical throughput statistics indicate that a higher thread count increased throughput, WebLogic increases the thread count. Similarly, if statistics indicate that fewer threads did not reduce throughput, WebLogic decreases the thread count. This new strategy makes it easier for administrators to allocate processing resources and manage performance, avoiding the effort and complexity involved in configuring, monitoring, and tuning custom execute queues.

    The Default WorkManager is used to handle thread management and perform self-tuning. This Work Manager is used by an application when no other Work Managers are specified in the application's deployment descriptors. In many situations, the Default WorkManager may be sufficient for most application requirements. WebLogic Server's thread-handling algorithms assign each application its own fair share by default. Applications are given equal priority for threads and are prevented from monopolizing them.

    The default Work Manager, as its name tells, is the Work Manager used by default. Thus, all applications deployed on WLS will use it. But sometimes, when your application is already in production, it's obvious you can't take your EAR/WAR, update the deployment descriptor(s) and redeploy it. The default Work Manager belongs to a thread pool which starts with only five threads; that's not much. If your application has to face a large number of hits, you may want to start with more than that. Well, that's quite easy; you have two options to do so.

    1) Modify config.xml. Just add the following line(s) to your server definition:

        <server>
            <name>AdminServer</name>
            <self-tuning-thread-pool-size-min>100</self-tuning-thread-pool-size-min>
            <self-tuning-thread-pool-size-max>200</self-tuning-thread-pool-size-max>
            [...]
        </server>

    2) Add some JVM parameters. Add the following system properties in setDomainEnv.sh/setDomainEnv.cmd or startWebLogic.sh/startWebLogic.cmd:

        -Dweblogic.threadpool.MinPoolSize=100 -Dweblogic.threadpool.MaxPoolSize=100

    Reboot WLS and see that the option has been taken into account.

    Disadvantage: So far, so good, but there is a disadvantage to tuning the Default WorkManager. Internally, WebLogic Server has many Work Managers configured for different types of work. If we run out of threads in the self-tuning pool (because of the system property -Dweblogic.threadpool.MaxPoolSize) due to it being undersized, then important work that WLS might need to do could be starved. So, limiting the self-tuning pool does not only limit the Default WorkManager; internally it also limits all the other internal Work Managers which WLS uses. The best alternative is therefore to override the Default WorkManager: create a Work Manager for the application and assign it to the application, instead of tuning the Default WorkManager.
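
    For illustration, a sketch of what such an application-scoped Work Manager could look like in a web application's WEB-INF/weblogic.xml; the names and the thread count are made up for the example, and the schema namespace varies with the WLS version:

        <weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
            <!-- hypothetical Work Manager scoped to this application only -->
            <work-manager>
                <name>AppWorkManager</name>
                <max-threads-constraint>
                    <name>AppMaxThreads</name>
                    <count>100</count>
                </max-threads-constraint>
            </work-manager>
            <!-- route this application's requests to the Work Manager above -->
            <wl-dispatch-policy>AppWorkManager</wl-dispatch-policy>
        </weblogic-web-app>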


  • Shrink Partition on Production Server

    - by Campo
    So, our production server was set up with only one large partition. I have set up a standby server and properly partitioned it. Now the boss wants the production environment's partition shrunk. It is an HP DL380 G5 and we have 4 hot-swap drives in a RAID 5. How should I best go about doing this? It seems like a bad idea to me. Should I use Windows or HP tools to do the partitioning? What should I be aware of in a production environment? The idea is to put the site (Inetpub) on a separate partition instead of the C: drive. How much downtime should I expect? Is this a terrible idea? Anything else I have missed?
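
    For what it's worth, if the installed OS is Server 2008 or later, diskpart can shrink a live NTFS volume in place; a sketch (volume number, size and drive letter are hypothetical, and a verified backup comes first):

        DISKPART> list volume            # identify the volume number of C:
        DISKPART> select volume 1        # hypothetical number for C:
        DISKPART> shrink desired=51200   # free ~50 GB from the end (size in MB)
        DISKPART> create partition primary
        DISKPART> format fs=ntfs label="Inetpub" quick
        DISKPART> assign letter=D

    On Server 2003 there is no native shrink, so that route means third-party partitioning tools or a backup/repartition/restore cycle, with correspondingly more downtime.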


  • Installing a new SQL Server instance fails

    - by Rubio
    In my setup I previously installed SQL Server Express 2005. Now I've switched to SQL Server Express 2008, and I updated the command-line parameters to those documented for the latter. If the computer already has SQL Server Express 2008 installed, my installer should create a new instance. The command-line parameters are as follows:

        /ACTION=Install /FEATURES=SQLEngine /QS /INSTANCENAME=ABCD /SECURITYMODE=SQL /SAPWD=CunningPassword

    The requested instance name does not exist on the target machine. This ends in error -2068643838. The logs show the following error: "No features were installed during the setup execution. The requested features may already be installed." If I remove the /QS parameter and try to install interactively, I get as far as the Feature Selection page. The UI shows three options: Instance Features, Shared Features and Redistributable Features. Whatever I select, clicking Next results in the same error ("There are validation errors on this page"). Any ideas, anyone? Thanks, -- Rubio
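
    One thing worth checking (an observation, not a confirmed fix): SQL Server 2008's unattended install documents /SQLSYSADMINACCOUNTS as required when installing the database engine, and it is absent from the command line above. A sketch with that parameter added (the account name is illustrative):

        setup.exe /ACTION=Install /FEATURES=SQLEngine /QS /INSTANCENAME=ABCD ^
            /SQLSYSADMINACCOUNTS="BUILTIN\Administrators" ^
            /SECURITYMODE=SQL /SAPWD=CunningPassword

    The Summary.txt log under %ProgramFiles%\Microsoft SQL Server\100\Setup Bootstrap\Log usually names the exact failing validation rule.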


  • How can I force Parallels' networking to obtain an IP through a wireless router?

    - by RLH
    Here is my setup: I have a MacBook, a Thunderbolt display, and an Ethernet connection plugged into the Thunderbolt display. During the day, most of my network use can (and should) go over the Ethernet associated with my display. However, I also need to be able to connect to a wireless router. This hasn't been a problem on the Mac OS X side, but the program that I need to run has to obtain an IP address from the wireless access point. Given my current setup, how can I arrange things so that I can access the internet in OS X, yet have my Windows 7 instance running in Parallels get its assigned IP address from a wireless router that my Mac is also connected to? I've fiddled around with the Parallels network settings for an hour, and I can't get Parallels to see the router, even though my Mac is certainly connected to it.


  • MySQL cluster: Error after 1 data node is shutdown and started again

    - by nitins
    We have configured MySQL Cluster (version 7.1) with 2 SQL/data nodes. We are using tablespaces instead of in-memory clustering. The setup was working fine. To test the setup I shut down one data node, updated a table, and then started the stopped node again. It's giving this error and not starting. Any ideas?

        Forced node shutdown completed. Occured during startphase 5.
        Caused by error 2306: 'Pointer too large(Internal error, programming error or
        missing error message, please report a bug). Temporary error, restart node'.


  • Visual Studio 2010 Best Practices

    - by Etienne Tremblay
    I'd like to thank Packt for providing me with a review version of the Visual Studio 2010 Best Practices eBook. In fairness, I also know the author, Peter, having seen him speak at DevTeach on many occasions. I started by looking at the table of contents to see what this book was about, knowing that "best practices" is a real misnomer; I wanted to see what they were. I really like the fact that he starts the book by saying they are not really best practices but rather recommended practices.

    As a Team Foundation Server user I found that chapter 2 was more for the open source crowd, and I really skimmed it. The portion on branching was well documented, although I'm not a fan of the testing branch myself, but the rest was right on. The section on the merge remote changes (bring the outside to you) paradigm is really important and was touched on.

    Chapter 3 has good solid practices on low-level constructs like generics and exceptions. Chapter 4 dives into architectural practices like decoupling, distributed architecture and data-based architecture. DTOs and ORMs are touched on briefly, as is NoSQL.

    Chapter 5 is about deployment and is really a great primer on all the "packaging" technologies like Visual Studio Setup and Deployment (deprecated in 2012), ClickOnce and WiX, the major player outside of commercial solutions. There is a nice section on how to move from VSSD to WiX; this is going to be important in the coming years due to the fact that VS 2012 doesn't support VSSD.

    In chapter 6 we dive into automated testing practices, including test coverage, mocking, TDD, SpecDD and continuous testing. Peter covers all those concepts really nicely, albeit succinctly. For a book on recommended practices I find this is really good.

    I really enjoyed chapter 7, which gave me a lot of great tips to enhance my Visual Studio "experience". The tips on organizing projects were good. Also, even though I knew about configurations, I like that he put that in there so you can move all your settings to another machine; a lot of people don't know about that. Quick Find and ReSharper are also briefly covered. He touches on macros (deprecated in 2012). Finally he touches on continuous integration, a very important concept in today's ALM landscape.

    Chapter 8 is all about parallelization: threads, async, division of labor, reactive extensions. All those concepts are touched on, and again generalized approaches to these modern problems are given.

    Chapter 9 goes into distributed apps, the most used and accepted practice in the industry for .NET projects. The chapter tackles concepts like scalability, messaging and cloud (the flavor of the month of distributed apps, although I think this one will stick ;-)). He also looks at protocols, TCP/UDP, and how to debug distributed apps. He touches on logging and health monitoring.

    Chapter 10 tackles recommended practices for web services, starting with implementing WCF services, which goes into all sorts of goodness like how to host in IIS or self-host, how to manually test WCF services, and a section on authentication and authorization. ASP.NET web services are also touched on in that chapter.

    All in all a good read, nice tips and accepted practices. I like the conciseness of the subjects, and Peter touches on a lot of things in this book and uses a lot of the current technology flavors to explain the concepts.

    Cheers, ET


  • Running PHP 5.2 FastCGI + Apache on CentOS 5 issue

    - by Goran
    I am trying to set up 2 versions of PHP on CentOS 5.9 using this tutorial: http://linuxplayer.org/2011/05/intall-multiple-version-of-php-on-one-server. I have followed it and installed the default 5.4.19, and I was trying to set up another PHP version, 5.2.17, to be run with FastCGI, so I followed the second part completely. However, when I try to open http://web2.example.com it returns a 500 error. In the Apache log there are only 2 lines that repeat:

        [notice] mod_fcgid: call /var/www/web2/index.php with wrapper /usr/local/php52/bin/fcgiwrapper.sh
        [notice] mod_fcgid: process /var/www/web2/index.php(25250) exit(server exited), terminated by calling exit(), return code: 255

    Please note that I had to add .php at the end of the FCGIWrapper line because Apache would not start without it:

        FCGIWrapper /usr/local/php52/bin/fcgiwrapper.sh .php

    Also please note that http://web1.example.com with PHP 5.4.19 is working absolutely fine. Please help. Thank you very much in advance.
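
    Exit code 255 usually means the wrapped php-cgi binary itself failed to start, so running it by hand (/usr/local/php52/bin/php-cgi -v) is a quick sanity check. For comparison, a typical wrapper script in setups like the linked tutorial looks roughly like this; the paths assume the tutorial's /usr/local/php52 prefix:

        #!/bin/sh
        # point PHP 5.2 at its own php.ini directory (assumed location)
        PHPRC=/usr/local/php52/etc
        export PHPRC
        # exec replaces the shell so mod_fcgid manages php-cgi directly
        exec /usr/local/php52/bin/php-cgi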


  • Multiple audio sources on a single GameObject in Unity

    - by angryInsomniac
    So, I have an audio system set up wherein I have loaded all my audio clips centrally and play them on demand by passing the requesting AudioSource into the sound manager. However, there is a complication: if I want to overlay multiple looping sounds, I need to have multiple audio sources on an object, which is fine, so I created two in my script, instantiated them and played my clips on them, and then the world went crazy. For some reason, when I create two audio sources on an object, only the latest one is ever used. Even if I explicitly keep the objects separated, playing a clip on one or the other plays the clip on the last one that was created. Furthermore, either this last one is not created in the right place, or it somehow messes with the rolloff rules, because I can hear it all across my level. Having just one source works fine, but putting a second one on the object makes everything go insane. Does anyone know the reason for / solution to this? Some pseudocode:

        guardSoundsSource = (AudioSource)gameObject.AddComponent("AudioSource");
        guardSoundsSource.name = "Guard_Sounds_source";
        // Setup this source

        guardThrusterSource = (AudioSource)gameObject.AddComponent("AudioSource");
        guardThrusterSource.name = "Guard_Thruster_Source";
        // setup this source

        // play using custom Sound manager
        soundMan.soundMgr.playOnSource(guardSoundsSource, "Guard_Idle_loop", true, GameManager.Manager.PlayerType);

    This method prints out the name of the source the sound was to be played on, and it always shows "Guard_Thruster_Source", even for the "Guard_Idle_loop", even though I clearly told it to use "Guard_Sounds_source".
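
    One detail worth flagging (a hypothesis, since the sound manager's code isn't shown): in Unity, a Component's .name is the name of the GameObject it is attached to, so the two .name assignments above both rename the same GameObject, and any logging of source.name will report "Guard_Thruster_Source" for both sources even when the correct source is playing. A sketch that keeps the two sources apart with explicit references and no per-component naming:

        using UnityEngine;

        public class GuardAudio : MonoBehaviour
        {
            AudioSource guardSounds;    // looping idle/ambient sounds
            AudioSource guardThruster;  // looping thruster sound

            void Awake()
            {
                // AddComponent returns a distinct instance each call; keep both
                // references rather than re-fetching with GetComponent, which
                // always returns the first AudioSource on the GameObject.
                guardSounds = gameObject.AddComponent<AudioSource>();
                guardThruster = gameObject.AddComponent<AudioSource>();
                guardSounds.loop = true;
                guardThruster.loop = true;
            }

            public void PlayIdle(AudioClip idleClip)
            {
                guardSounds.clip = idleClip;   // plays on this source only
                guardSounds.Play();
            }

            public void PlayThruster(AudioClip thrusterClip)
            {
                guardThruster.clip = thrusterClip;
                guardThruster.Play();
            }
        }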


  • DAO/Webservice Consumption in Web Application

    - by Gavin
    I am currently working on converting a "legacy" web-based (ColdFusion) application from a single data source (MSSQL database) to multi-tier OOP. In my current system there is a read/write database with all the usual stuff, plus additional "read-only" databases that are exported daily/hourly from an Enterprise Resource Planning (ERP) system by SSIS jobs, containing business product/item and manufacturing/SCM planning data. The reason I have the opportunity and need to convert to multi-tier OOP is that a newer, more modern ERP system is being implemented business-wide that will be a complete replacement. This newer ERP system offers several interfaces for third-party applications like mine, from direct SQL access to either a .NET web service or a SOAP-like web service. I have found several suitable frameworks I would be happy to use (ColdSpring, FW/1), but I am not sure what design patterns apply to my data access object/component and how to manage the connection/session tokens. With this background, my question has the following three parts:

    1. I have concerns about moving from the relative safety of an SSIS job, which protects me from downtime and the speed of the ERP system, to connecting directly with one of the web services, which I note seem significantly slower than I expected (simple/small requests often take up to a whole second). Are there any design patterns I can investigate/use to cache/protect my data tier?

    2. It is my understanding that data access objects (the components that connect directly with the web services and convert their responses into the data types I can then work with in my domain objects) should be singletons (and will act as an Adapter/Facade); am I correct?

    3. As part of the data access object I have to set up a connection by username/password (I could set up multiple users and/or connect multiple times with this), which responds with a session token that needs to be provided on every subsequent request. Do I do this once and share it across the whole application? Do I set up a new "connection" for every user of my application and keep the token in their session scope (might quickly hit licensing limits)? Do I set the "connection" up per page request? Or is there a design pattern I am missing that can manage multiple "connections", where a request/access uses the first free "connection"?

    It is worth noting that if the ERP system dies I will need to reset/invalidate all the connections and start from scratch, and depending on which web service I use I might need to manually close the "connection/session".


  • Building a new PC, Installing XP, blue screen of death

    - by Tim
    I got a Gigabyte barebones kit and am installing Windows XP (SP1). The initial setup works; then it restarts and goes into the second phase of the setup. Then, when installing components (I think that's what it says), it gets halfway done and comes up with a blue screen saying IRQL_NOT_LESS_OR_EQUAL. BUT! I got past that by installing Windows XP Media Center Edition. Now I am trying to install the drivers for my Asus ATI Radeon 5770 graphics card and I get another blue screen of death that doesn't give much info, something about address x0000005. Do you think there is something wrong with my system, or do you think that if I got Windows 7 it would take care of things? Sorry for probably not giving enough info. Here is what I have:

        Motherboard   - Gigabyte S-series GA-H55M-S2(v)
        PSU           - Ultra 500 watt ATX
        HDD           - Seagate Barracuda 7200, Serial ATA
        CPU           - Intel i3
        Memory        - 4 GB Crucial
        Graphics Card - Asus ATI Radeon 5770 1 GB DDR5


  • consequences of changing uid/gid on snow leopard

    - by Peter Carrero
    OK, so I introduced a Mac laptop to my home network of Kubuntu hosts and Fedora servers. Currently I don't have NIS or LDAP set up (I've got only 2 users) and I just manually set up the UIDs/GIDs on the hosts. I would like to run the following commands on my MacBook:

        dscl . -change /Users/me UniqueID 501 1000
        dscl . -change /Users/me PrimaryGroupID 20 503
        chown -R 1000:503 /Users/me
        dscl . append /Groups/staff GroupMembership me

    Before I go on to hose my new Mac, I would like to know if this is the right thing to do and, if so, what adverse consequences I may face. Thanks.


  • Logstash shipper & server on the same box

    - by keftes
    I'm trying to set up a central logstash configuration. However, I would like to be sending my logs through syslog-ng and not third-party shippers. This means that my logstash server accepts, via syslog-ng, all the logs from the agents. I then need to install a logstash process that will be reading from /var/log/syslog-clients/* and grabbing all the log files that are sent to the central log server. These logs will then be sent to redis on the same VM. In theory I also need to configure a second logstash process that will read from redis, start indexing the logs and send them to elasticsearch. My question: do I have to use two different logstash processes (shipper and server) even if I am on the same box (I want one log server instance)? Is there any way to have just one logstash configuration and have the process read from syslog-ng, write to redis, and also read from redis and output to elasticsearch? Diagram of my setup:

        [client] --syslog-ng--> [log server] --syslog-ng--> /var/log/syslog-clients/*
        /var/log/syslog-clients/* --logstash-shipper--> redis --logstash-server--> elasticsearch <--- kibana
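
    To the second question: a single logstash process can run several inputs and route events between outputs, at least on versions with conditionals (1.3+). A sketch under those assumptions; option names (host vs hosts, etc.) vary across logstash versions, and the tag name here is made up:

        input {
          file  { path => "/var/log/syslog-clients/*" type => "syslog" }
          # events re-read from the queue get an extra tag so they are routed onward
          redis { host => "127.0.0.1" data_type => "list" key => "logstash" tags => ["from-queue"] }
        }
        output {
          if "from-queue" in [tags] {
            # second hop: index events pulled back out of redis
            elasticsearch { host => "127.0.0.1" }
          } else {
            # first hop: queue fresh file events into redis
            redis { host => "127.0.0.1" data_type => "list" key => "logstash" }
          }
        }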


  • IIS in VirtualBox serving files from shared Ubuntu folder

    - by queen3
    Here's the problem: I want to use Ubuntu, but I need to develop ASP.NET (MVC) sites. So I set up VirtualBox with Win2003 and IIS6, but I would prefer my working files to be located in my Ubuntu home folder. So I set up a shared folder in VirtualBox and made an IIS6 virtual directory work from there. The problem is, IIS6 can't do this. Whatever I try (mapped drive, network URI path) I get different IIS errors: can't access the folder (for the mapped drive), can't monitor file system changes (for the \vboxsvr share path), and so on. Is there a way for IIS6 in a virtual machine to configure a virtual application folder to be on the host machine (Ubuntu), be it a shared folder, mapped drive, SMB share, or whatever?


  • Powershell: Install-dotNET4 function

    - by marc dekeyser
    This function will download and install .NET 4.0. It uses the Get-Framework-Versions function to determine whether the installation is necessary or not. Internet connectivity is required, as the script downloads the setup file automatically (and sleeps for 360 seconds... I had a function in there to monitor for install completion at first; it turns out the setup file spawns so many child processes that the function just got confused and locked up -_-). Alternatively you could drop the installation file in the folder specified in the $folderPath variable; that will skip the download and use the file. This function adapts easily to other versions, e.g. I use it for PowerShell 3 installs as well!

        Function install-dotNet4 () {
            if (($InstalledDotNET -eq "4.0") -or ($InstalledDotNET -eq "4.0c")) {
                write-host ".NET 4.0 Framework is already installed" -foregroundcolor Green
            } else {
                # set a var for the folder you are looking for
                $folderPath = 'C:\Temp'

                # Check if folder exists, if not, create it
                if (Test-Path $folderPath) {
                    Write-Host "The folder $folderPath exists." -ForeGroundColor Green
                } else {
                    Write-Host "The folder $folderPath does not exist, creating..." -NoNewline -ForegroundColor Red
                    New-Item $folderPath -type directory | Out-Null
                    Write-Host " - done!" -ForegroundColor Green
                }

                # Check if file exists, if not, download it
                $file = $folderPath + "\dotNetFx40_Full_x86_x64.exe"
                if (Test-Path $file) {
                    write-host "The file $file exists." -ForeGroundColor Green
                } else {
                    # Download Microsoft .Net 4.0 Framework
                    Write-Host "Downloading Microsoft .Net 4.0 Framework..." -nonewline -ForeGroundColor DarkYellow
                    $clnt = New-Object System.Net.WebClient
                    $url = "http://download.microsoft.com/download/9/5/A/95A9616B-7A37-4AF6-BC36-D6EA96C8DAAE/dotNetFx40_Full_x86_x64.exe"
                    $clnt.DownloadFile($url, $file)
                    Write-Host " - done!" -ForegroundColor Green
                }

                # Install Microsoft .Net Framework
                Write-Host "Installing Microsoft .Net Framework..." -nonewline -ForegroundColor DarkYellow
                $dotNET4 = $folderPath + "\dotNetFx40_Full_x86_x64.exe /quiet /norestart"
                Invoke-Expression $dotNET4
                write-host " - done!" -ForegroundColor Green
                start-sleep -seconds 360
            }
        }


  • Planning home network

    - by gakhov
    I'm planning to set up my home network from scratch and want to ask for professional opinions or tips. My home is connected to the Internet with a cable connection (100 Mb/s). The devices I would like to connect are: a VoIP phone (RJ-45), a TV (WiFi/LAN), 3 laptops (WiFi), 2 smartphones (WiFi), an iPad (WiFi), a Kindle (WiFi), a network printer and, probably, home media storage (WiFi/LAN). As you can see, most of the load will be on WiFi connections (even if the TV supports WiFi, it's probably better to connect it by LAN?). So, I need help choosing the best router (or combination of routers) to support stable connections for all these devices and minimize the total number of routers/adapters. I like how Cisco/Linksys devices have worked for me in the past, so preferably (but not obligatorily) I want to set up the network with their solutions. Any thoughts?


  • iptables to play nice with tor and ntpd

    - by directedition
    I'm setting up a server to operate as a Tor relay and nothing else. I set up iptables to only allow talk on port 9001 and it worked fine, but there was an issue: the clock needs to be properly set and maintained for the relay to work properly, so I needed ntpd set up and running. But for some reason I can't get iptables to work as I want it to. I'm trying to have it allow only Tor and ntpd to talk over the network, but when I set it up to allow port 123 using UDP, suddenly it ignores my "-A OUTPUT ! -s 127.0.0.1 -j DROP" and allows everything through. How should I go about this? Please excuse my ignorance; I'm brand new to iptables.
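
    For comparison, a minimal OUTPUT-chain sketch for this goal; it assumes, as the question does, that the relay's ORPort is 9001 and that loopback traffic should stay unrestricted. Rule order matters, since the final DROP catches whatever the earlier rules didn't accept:

        # allow loopback and already-established flows
        iptables -A OUTPUT -o lo -j ACCEPT
        iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        # Tor relay traffic (ORPort 9001)
        iptables -A OUTPUT -p tcp --dport 9001 -j ACCEPT
        # NTP for ntpd
        iptables -A OUTPUT -p udp --dport 123 -j ACCEPT
        # everything else from this host is dropped
        iptables -A OUTPUT -j DROP

    Note that a relay also dials out to other relays' ORPorts, which are not all 9001, so in practice the outbound TCP rule may need to be broader.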


  • Problem with XeTeX (LaTeX) and system fonts

    - by mghg
    I have started to use an enterprise-specific class for LaTeX, but have got a problem with the use of system fonts on Ubuntu. The class uses the fontspec package; I have therefore been instructed to use XeTeX (i.e. the command xelatex instead of latex or pdflatex). However, the command xelatex testfile.tex results in the following message:

        ! Package xkeyval Error: `TeX' undefined in families `Ligatures'.

        See the xkeyval package documentation for explanation.
        Type  H <return>  for immediate help.
        ...

        l.61 \newfontfamily\headfont{Arial}

    The class has previously been used on Mac and Windows, and the font setup is as follows:

        \newfontfamily\headfont{Arial}
        \newcommand\texthead[1]{\headfont #1}
        \setromanfont{Georgia}
        \setmainfont{Georgia}
        \setsansfont[Scale=MatchLowercase]{Verdana}

    It has been suggested that, since XeTeX makes use of system fonts and the class file has worked flawlessly on Mac and Windows, the problem might be that Arial is not a font name used on Ubuntu. I have tried to exchange Arial for Ubuntu Light in the setup code above, but that has not been any improvement. Any suggestions, please, on how to move forward?
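
    Two observations, offered tentatively: the `Ligatures' error usually points at a fontspec/XeTeX installation that predates the Ligatures=TeX feature (so updating the TeX distribution is worth trying before changing fonts), and the font names fontspec accepts are the ones fontconfig reports via fc-list. A sketch using families that commonly ship with Ubuntu (assuming Liberation and DejaVu are installed; Liberation Sans is metric-compatible with Arial):

        \usepackage{fontspec}
        % Arial substitute for headings
        \newfontfamily\headfont{Liberation Sans}
        \setmainfont{Liberation Serif}
        \setsansfont[Scale=MatchLowercase]{DejaVu Sans}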


  • Is real-time or synchronous replication possible over WAN link?

    - by johnnyb10
    The company I work for is looking to implement truly real-time file replication with file locking over a WAN link that spans over 2000 miles. We currently have a 16-drive SAN setup in our east coast office. We also have an office out in Colorado that will have the exact same SAN setup. The idea is to have those two SANs contain exactly the same data at all times, which will allow us to work with the same data pool, and which will also provide us with an offsite backup solution should a failure occur on either end. We're running Server 2008. The objective is to enable users in the east coast office to work on files and have those changes be instantly updated on the Colorado SAN as well. We also need file locking, so that there will be no conflicts or overwritten changes if users attempt to work on the same file.

    Is this scenario even possible, at speeds that would make the files usable? And if so, what software would we need to pull this off? As I understand it, DFS-R does not provide file locking, so if we used that, we would need to go with a third-party product like PeerLock. But I don't even know if DFS-R is an option. Can it replicate quickly enough over a WAN link? Can any product? It seems that if we were to use synchronous replication, the programs would be unacceptably slow, as every write would have to wait for confirmation from the other end of the link. But if we used asynchronous replication, what kind of latency would we be looking at?

    There is a product from GlobalScape called WAFS that claims to provide "File coherence with real-time file locking, file release, and synchronization" and says that "As files are modified, changes are mirrored instantly using intelligent byte-level differencing to minimize the impact on network bandwidth". So this sounds like synchronous replication, but that doesn't even seem possible, given physical limitations such as the speed of light. If anyone has any experience with this kind of setup, or knows whether it's even possible, I'd appreciate your input and suggestions, including recommendations for software that we should check out.
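
    As a back-of-the-envelope check on the speed-of-light point raised above (approximate figures, assuming a direct fiber path):

        distance:              2000 mi ≈ 3200 km
        signal speed in fiber: ≈ 200,000 km/s (about 2/3 of c)
        one-way latency:       3200 / 200,000 s ≈ 16 ms
        round trip:            ≈ 32 ms, before any routing or equipment overhead

    So a synchronous write that must be acknowledged by the far SAN pays at least roughly 32 ms per round trip, which is why synchronous replication at this distance is generally limited to applications that can tolerate that overhead on every write.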


  • SVN Checkout URL - fresh install

    - by Webnet
    I just set up SVN on a server running Ubuntu Server as a fresh install. I've got it up and running but am having difficulty determining how to connect to it. I'm trying to do an import using the local IP address, http://IP/RepositoryName, but it's saying it can't resolve the IP. I'm wondering if there's something on the server I need to set up. I have not modified dav_svn.conf, because there is another server here that is running SVN (I'm migrating it to a new server) and its dav_svn.conf is not modified. The current working SVN has a subdomain associated with the IP location of the server but doesn't do anything special with the ports as far as I can tell. I'm getting this error via RapidSVN when I try to import:

        Error: Error while performing action: OPTIONS of 'http://IP/RepositoryName':
        could not connect to server (http://IP)

    Any help would be appreciated.
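
    "Could not connect" on the OPTIONS request means nothing answered on port 80 at that address, so before touching SVN itself it's worth confirming Apache is running and listening (e.g. telnet IP 80 from the client). If Apache answers but the repository path 404s, the usual missing piece is the dav_svn location block; a sketch (the repository path is hypothetical):

        # /etc/apache2/mods-available/dav_svn.conf
        <Location /RepositoryName>
            DAV svn
            SVNPath /var/lib/svn/RepositoryName
            # or: SVNParentPath /var/lib/svn  to serve every repository under that directory
        </Location>

    followed by enabling the module and reloading: sudo a2enmod dav_svn && sudo service apache2 reload.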


  • Constant game speed independent of variable FPS in OpenGL with GLUT?

    - by Nazgulled
    I've been reading Koen Witters' detailed article about different game loop solutions, but I'm having some problems implementing the last one with GLUT, which is the recommended one. After reading a couple of articles, tutorials and code from other people on how to achieve a constant game speed, I think that what I currently have implemented (I'll post the code below) is what Koen Witters called Game Speed dependent on Variable FPS, the second one in his article.

    First, through my searching experience, there are a couple of people who probably have the knowledge to help out on this but don't know what GLUT is, so I'm going to try and explain (feel free to correct me) the relevant functions of this OpenGL toolkit for my problem. Skip this section if you know what GLUT is and how to play with it.

    GLUT Toolkit: GLUT is an OpenGL toolkit that helps with common tasks in OpenGL.

    - glutDisplayFunc(renderScene) takes a pointer to a renderScene() function callback, which will be responsible for rendering everything. The renderScene() function will only be called once after the callback registration.
    - glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0) takes the number of milliseconds to pass before calling the callback processAnimationTimer(). The last argument is just a value to pass to the timer callback. processAnimationTimer() will not be called every TIMER_MILLISECONDS, but just once.
    - glutPostRedisplay() requests GLUT to render a new frame, so we need to call this every time we change something in the scene.
    - glutIdleFunc(renderScene) could be used to register a callback to renderScene() (this does not make glutDisplayFunc() irrelevant), but this function should be avoided because the idle callback is continuously called when events are not being received, increasing the CPU load.
    - glutGet(GLUT_ELAPSED_TIME) returns the number of milliseconds since glutInit was called (or the first call to glutGet(GLUT_ELAPSED_TIME)). That's the timer we have with GLUT. I know there are better alternatives for high-resolution timers, but let's keep to this one for now.

    I think this is enough information on how GLUT renders frames, so people who didn't know about it can also pitch in on this question to try and help, if they feel like it.

    Current Implementation: Now, I'm not sure I have correctly implemented the second solution proposed by Koen, Game Speed dependent on Variable FPS. The relevant code for that goes like this:

        #define TICKS_PER_SECOND 30
        #define MOVEMENT_SPEED 2.0f

        const int TIMER_MILLISECONDS = 1000 / TICKS_PER_SECOND;

        int previousTime;
        int currentTime;
        int elapsedTime;

        void renderScene(void) {
            (...)

            // Setup the camera position and looking point
            SceneCamera.LookAt();

            // Do all drawing below...

            (...)
        }

        void processAnimationTimer(int value) {
            // setups the timer to be called again
            glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0);

            // Get the time when the previous frame was rendered
            previousTime = currentTime;

            // Get the current time (in milliseconds) and calculate the elapsed time
            currentTime = glutGet(GLUT_ELAPSED_TIME);
            elapsedTime = currentTime - previousTime;

            /* Multiply the camera direction vector by constant speed then by the
               elapsed time (in seconds) and then move the camera */
            SceneCamera.Move(cameraDirection * MOVEMENT_SPEED * (elapsedTime / 1000.0f));

            // Requests to render a new frame (this will call my renderScene() once)
            glutPostRedisplay();
        }

        void main(int argc, char **argv) {
            glutInit(&argc, argv);

            (...)

            glutDisplayFunc(renderScene);

            (...)

            // Setup the timer to be called one first time
            glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0);

            // Read the current time since glutInit was called
            currentTime = glutGet(GLUT_ELAPSED_TIME);

            glutMainLoop();
        }

    This implementation doesn't feel right. It works in the sense that it helps the game speed stay constant regardless of the FPS, so that moving from point A to point B takes the same time no matter the high/low framerate. However, I believe I'm limiting the game framerate with this approach. Each frame will only be rendered when the timer callback is called, which means the framerate will be roughly around TICKS_PER_SECOND frames per second. This doesn't feel right; you shouldn't limit your powerful hardware, it's wrong. It's my understanding, though, that I still need to calculate elapsedTime. Just because I'm telling GLUT to call the timer callback every TIMER_MILLISECONDS, it doesn't mean it will always do that on time.

    I'm not sure how I can fix this and, to be completely honest, I have no idea what the game loop is in GLUT, you know, the while( game_is_running ) loop in Koen's article. But it's my understanding that GLUT is event-driven and that the game loop starts when I call glutMainLoop() (which never returns), yes? I thought I could register an idle callback with glutIdleFunc() and use that as a replacement for glutTimerFunc(), only rendering when necessary (instead of all the time as usual), but when I tested this with an empty callback (like void gameLoop() {}) it was basically doing nothing, only a black screen; the CPU spiked to 25% and remained there until I killed the game, at which point it went back to normal. So I don't think that's the path to follow.

    Using glutTimerFunc() is definitely not a good approach to base all movements/animations on, as I'm limiting my game to a constant FPS, which is not cool. Or maybe I'm using it wrong and my implementation is not right? How exactly can I have a constant game speed with variable FPS? More exactly, how do I correctly implement Koen's Constant Game Speed with Maximum FPS solution (the fourth one in his article) with GLUT? Maybe this is not possible at all with GLUT? If not, what are my alternatives? What is the best approach to this problem (constant game speed) with GLUT? I originally posted this question on Stack Overflow before being pointed to this site. The following is a different approach I tried after creating the question on SO, so I'm posting it here too.

    Another Approach: I've been experimenting, and here's what I was able to achieve now. Instead of calculating the elapsed time in a timed function (which limits my game's framerate) I'm now doing it in renderScene(). Whenever changes to the scene happen I call glutPostRedisplay() (i.e. camera moving, some object animation, etc.), which will make a call to renderScene(). I can use the elapsed time in this function to move my camera, for instance. My code has now turned into this:

        int previousTime;
        int currentTime;
        int elapsedTime;

        void renderScene(void) {
            (...)

            // Get the time when the previous frame was rendered
            previousTime = currentTime;

            // Get the current time (in milliseconds) and calculate the elapsed time
            currentTime = glutGet(GLUT_ELAPSED_TIME);
            elapsedTime = currentTime - previousTime;

            /* Multiply the camera direction vector by constant speed then by the
               elapsed time (in seconds) and then move the camera */
            SceneCamera.Move(cameraDirection * MOVEMENT_SPEED * (elapsedTime / 1000.0f));

            // Setup the camera position and looking point
            SceneCamera.LookAt();

            // All drawing code goes inside this function
            drawCompleteScene();

            glutSwapBuffers();

            /* Redraw the frame ONLY if the user is moving the camera
               (similar code will be needed to redraw the frame for other events) */
            if(!IsTupleEmpty(cameraDirection)) {
                glutPostRedisplay();
            }
        }

        void main(int argc, char **argv) {
            glutInit(&argc, argv);

            (...)

            glutDisplayFunc(renderScene);

            (...)

            currentTime = glutGet(GLUT_ELAPSED_TIME);

            glutMainLoop();
        }

    Conclusion: it's working, or so it seems. If I don't move the camera, the CPU usage is low and nothing is being rendered (for testing purposes I only have a grid extending for 4000.0f, while zFar is set to 1000.0f). When I start moving the camera, the scene starts redrawing itself. If I keep pressing the movement keys, the CPU usage will increase; this is normal behavior. It drops back when I stop moving. Unless I'm missing something, it seems like a good approach for now. I did find this interesting article on iDevGames, and this implementation is probably affected by the problem described in that article. What are your thoughts on that?

    Please note that I'm just doing this for fun; I have no intention of creating a game to distribute or anything like that, not in the near future at least. If I did, I would probably go with something else besides GLUT. But since I'm using GLUT, and setting aside the problem described on iDevGames, do you think this latest implementation is sufficient for GLUT? The only real issue I can think of right now is that I'll need to keep calling glutPostRedisplay() every time the scene changes something, and keep calling it until there's nothing new to redraw. A little complexity added to the code for a better cause, I think. What do you think?
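
    For reference, a sketch of how Koen's fourth solution (fixed logic rate, unlimited render rate with interpolation) might map onto GLUT's event model, accepting the idle-callback CPU cost mentioned above; updateGame() and renderSceneInterpolated() are hypothetical stand-ins for the game's own logic and drawing:

        #define TICKS_PER_SECOND 25
        #define SKIP_TICKS (1000 / TICKS_PER_SECOND)
        #define MAX_FRAMESKIP 5

        static int nextGameTick;  /* initialize to glutGet(GLUT_ELAPSED_TIME) before glutMainLoop() */

        void onIdle(void) {
            int loops = 0;
            /* run game logic at a fixed rate; skip rendering if logic falls behind */
            while (glutGet(GLUT_ELAPSED_TIME) > nextGameTick && loops < MAX_FRAMESKIP) {
                updateGame();                 /* fixed-timestep logic update */
                nextGameTick += SKIP_TICKS;
                loops++;
            }
            /* fraction of a logic tick elapsed, used to interpolate drawing */
            float interpolation = (glutGet(GLUT_ELAPSED_TIME) + SKIP_TICKS - nextGameTick)
                                  / (float)SKIP_TICKS;
            renderSceneInterpolated(interpolation);  /* draw as often as the CPU/GPU allow */
        }

    Registered with glutIdleFunc(onIdle), this renders as fast as the hardware allows while the game logic still advances exactly TICKS_PER_SECOND times per second; the busy idle loop is the price of "maximum FPS" in GLUT.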


  • AWS SSL Load Balancer

    - by Jay Francis
    OK, I am looking for some pointers. Basically I have a white-label app/site that will allow users to set up their own domain to use for their customer front end. We have 2 dedicated servers and a load balancer. The problem is SSL. We were thinking about using an AWS ELB to handle the SSL load balancing, but we can't seem to figure out if it will handle it properly; it seems to be set up to work with EC2 instances, while we are using externally hosted servers behind a load balancer. A blog post by AWS looks similar to what we need, but it only seems to work with EC2 instances: http://aws.typepad.com/aws/2011/08/elastic-load-balancer-ssl-support-options.html Has anyone had experience setting ELB SSL load balancers up to work with external servers?


  • Setting up a RHEL 5.x RPM build server for mortal users

    - by Chen Levy
    My task is to set up a RHEL 5.x build host that can build RPMs for mortal users. On RHEL 6.x with RPM version 4.8, I have in /usr/lib/rpm/macros:

        # Path to top of build area.
        %_topdir %{getenv:HOME}/rpmbuild

    On RHEL 5.x with RPM version 4.4, %{getenv:HOME} is not available. I know that I can use /home/someuser/.rpmmacros:

        %_topdir /home/someuser/rpmbuild

    and this will work for that user; however, I don't want to do this for every user separately. Moreover, since .rpmmacros will not expand $HOME or ~, I suspect it is unsafe to use those. This in turn makes /etc/skel unsuitable for this task (or so I suspect). So in short, my question is: how do I set up a RHEL 5.x host that allows all users to build RPM packages in their home directory?
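
    One approach that should work on RPM 4.4, where %{getenv:...} is missing, is the %(...) shell-expansion macro form, which runs a command at expansion time as the invoking user. A sketch for a system-wide macro file (path per RHEL convention; assumes $HOME is set sanely for all build users):

        # /etc/rpm/macros — applies to every user on this host
        %_topdir %(echo $HOME)/rpmbuild

    Each user still needs the rpmbuild directory tree (BUILD, RPMS, SOURCES, SPECS, SRPMS) under their home, which could then be seeded via /etc/skel, since the macro, unlike a hard-coded .rpmmacros, no longer embeds a per-user path.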

