Search Results

Search found 7442 results on 298 pages for 'dynamic allocation'.

Page 188/298

  • Error while compiling Hello world program for CUDA

    - by footy
    I am using Ubuntu 12.10 and have successfully installed CUDA 5.0 and its sample kits too. I have also run sudo apt-get install nvidia-cuda-toolkit. Below is my hello world program for CUDA:

        #include <stdio.h>  /* Core input/output operations */
        #include <stdlib.h> /* Conversions, random numbers, memory allocation, etc. */
        #include <math.h>   /* Common mathematical functions */
        #include <time.h>   /* Converting between various date/time formats */
        #include <cuda.h>   /* CUDA related stuff */

        __global__ void kernel(void)
        {
        }

        /* MAIN PROGRAM BEGINS */
        int main(void)
        {
            /* Dg = 1; Db = 1; Ns = 0; S = 0 */
            kernel<<<1,1>>>();

            /* PRINT 'HELLO, WORLD!' TO THE SCREEN */
            printf("\n Hello, World!\n\n");

            /* INDICATE THE TERMINATION OF THE PROGRAM */
            return 0;
        }
        /* MAIN PROGRAM ENDS */

    The following errors occur when I compile it with nvcc -g hello_world_cuda.cu -o hello_world_cuda.x:

        /tmp/tmpxft_000033f1_00000000-13_hello_world_cuda.o: In function `main':
        /home/adarshakb/Documents/hello_world_cuda.cu:16: undefined reference to `cudaConfigureCall'
        /tmp/tmpxft_000033f1_00000000-13_hello_world_cuda.o: In function `__cudaUnregisterBinaryUtil':
        /usr/include/crt/host_runtime.h:172: undefined reference to `__cudaUnregisterFatBinary'
        /tmp/tmpxft_000033f1_00000000-13_hello_world_cuda.o: In function `__sti____cudaRegisterAll_51_tmpxft_000033f1_00000000_4_hello_world_cuda_cpp1_ii_b81a68a1':
        /tmp/tmpxft_000033f1_00000000-1_hello_world_cuda.cudafe1.stub.c:1: undefined reference to `__cudaRegisterFatBinary'
        /tmp/tmpxft_000033f1_00000000-1_hello_world_cuda.cudafe1.stub.c:1: undefined reference to `__cudaRegisterFunction'
        /tmp/tmpxft_000033f1_00000000-13_hello_world_cuda.o: In function `cudaError cudaLaunch<char>(char*)':
        /usr/lib/nvidia-cuda-toolkit/include/cuda_runtime.h:958: undefined reference to `cudaLaunch'
        collect2: ld returned 1 exit status

    I am also making sure that I use gcc and g++ version 4.4 (with 4.7 there is some problem with CUDA).
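    Those undefined references are all to the CUDA runtime library, which suggests the link step is not picking up libcudart - a common side effect of mixing the Ubuntu nvidia-cuda-toolkit package with a separate CUDA 5.0 install (note the /usr/lib/nvidia-cuda-toolkit headers in the error output). A hedged sketch of making the 5.0 toolchain and its runtime explicit; the /usr/local/cuda-5.0 paths are assumptions, adjust them to your installation:

        # assumption: CUDA 5.0 was installed under /usr/local/cuda-5.0
        export PATH=/usr/local/cuda-5.0/bin:$PATH
        export LD_LIBRARY_PATH=/usr/local/cuda-5.0/lib64:$LD_LIBRARY_PATH

        # compile and link against the matching runtime explicitly
        nvcc -g hello_world_cuda.cu -o hello_world_cuda.x -L/usr/local/cuda-5.0/lib64 -lcudart

    If that links cleanly, the distribution-packaged nvcc and headers were probably the ones being picked up before.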


  • Elantech touchpad: Right + Left click doesn't work

    - by Robert Kilar
    This is a very common problem and still unresolved. The Elantech driver disables the simultaneous right button + left button touchpad click. As a result you cannot aim and shoot in games, and you cannot code that kind of interaction in your own applications. The driver detects the LMB+RMB click but somehow filters it out. I could not find the registry entry that would disable this obviously horrible setting. Please note:
    1. I have the latest drivers; the problem also existed with really old ones (a year old) on Windows 7, 8 and 8.1.
    2. It has nothing to do with Windows or the hardware, only with the Elantech driver settings.
    3. The driver does detect the LMB+RMB click (it is shown on a dynamic icon in the task bar), and uninstalling the driver fixes the problem, but then you can't use your touchpad fully.


  • image analysis and 64bit OS

    - by picciopiccio
    I developed a C# application that makes use of the Cognex vision library (VPro). The application is built with Visual Studio 2008 Pro on a 32-bit Windows PC with 3 GB of RAM. During startup I see that a large amount of memory is allocated. So far so good, but as I add more and more vision operations the memory allocation grows and part of the application (only the Cognex OCX) stops working properly. The rest of the application keeps working (worker threads, COM over sockets, ...). I did whatever I could to save memory, but once around 700 MB is allocated the problems begin. A note in the Cognex library documentation says that /LARGEADDRESSAWARE is not supported. So I'm thinking of migrating my app to 64-bit Windows, but what do I have to do? Can I simply run on a 64-bit processor and 64-bit Windows without recompiling, leaving my application as a 32-bit process, and still take advantage of 64-bit? Or should I recompile it? If I don't need to recompile, can I link it against the 64-bit Cognex library? If I do have to recompile, is it possible to cross-compile so that my development machine stays a 32-bit PC? Any help will be much appreciated! Thanks in advance.


  • how to manage a "resource" array efficiently

    - by Haiyuan Zhang
    The scenario of my question is that one needs to use a fixed-size array to keep track of a certain number of "objects". The object here can be as simple as an integer or as complex as a very fancy data structure. "Keep track" here means allocating one object when another part of the app needs an instance, and recycling it for future allocation when that instance is returned. Finally, let me use C++ to put my problem in a more descriptive way:

        #define MAX 65535 /* 65535 just indicates that many items should be handled. Performance demanding! */

        typedef struct {
            int item;
        } Item_t;

        Item_t items[MAX];

        class itemManager
        {
        private:
            /* up to you.... */
        public:
            int get();           /* get an index to a free Item_t in items */
            bool put(int index); /* recycle the Item_t indicated by an index in items */
        };

    How would you implement the two public functions of itemManager? It's up to you to add any private members.
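    A minimal sketch of one common answer - an index free list with O(1) get and put - is below. The class layout follows the question; the free-list member and the -1 "pool exhausted" convention are my additions, and there is deliberately no protection against recycling the same index twice.

        #define MAX 65535

        typedef struct {
            int item;
        } Item_t;

        Item_t items[MAX];

        class itemManager
        {
        private:
            int freeList[MAX];   // stack of indices that are currently free
            int top;             // number of entries on that stack
        public:
            itemManager() : top(MAX)
            {
                for (int i = 0; i < MAX; ++i)
                    freeList[i] = MAX - 1 - i;   // so index 0 is handed out first
            }

            /* get an index to a free Item_t in items, or -1 if the pool is exhausted */
            int get()
            {
                if (top == 0) return -1;
                return freeList[--top];
            }

            /* recycle the Item_t at 'index'; returns false if the index is out of range
               or the pool already holds MAX free entries */
            bool put(int index)
            {
                if (index < 0 || index >= MAX || top >= MAX) return false;
                freeList[top++] = index;
                return true;
            }
        };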


  • Distributed file systems

    - by Neeraj
    I need to implement a distributed storage system for a set of nodes (devices) connected in a mesh network. Basically, my design goals are:
    - The storage system should be capable of handling dynamic entry and exit of nodes.
    - Replication (for fault tolerance).
    For this I am thinking of using a distributed file system, so that every node can access data on the other nodes in a transparent manner. Are there some simple, easily pluggable open-source implementations? Thanks for your thoughts!


  • malloc works, cudaHostAlloc segfaults?

    - by Mikhail
    I am new to CUDA and I want to use cudaHostAlloc. I was able to isolate my problem to the following code. Using malloc for the host allocation works; using cudaHostAlloc results in a segfault, possibly because the allocated area is invalid? When I dump the pointer in both cases it is not null, so cudaHostAlloc returns something...

        // works
        in_h = (int*) malloc(length*sizeof(int));
        for (int i = 0; i < length; i++) in_h[i] = 2;

        // doesn't work
        cudaHostAlloc((void**)&in_h, length*sizeof(int), cudaHostAllocDefault);
        for (int i = 0; i < length; i++) in_h[i] = 2;  // segfaults

    Standalone code:

        #include <stdio.h>

        void checkDevice()
        {
            cudaDeviceProp info;
            int deviceName;
            cudaGetDevice(&deviceName);
            cudaGetDeviceProperties(&info, deviceName);
            if (!info.deviceOverlap)
            {
                printf("Compute device can't use streams and should be discared.");
                exit(EXIT_FAILURE);
            }
        }

        int main()
        {
            checkDevice();
            int *in_h;
            const int length = 10000;
            cudaHostAlloc((void**)&in_h, length*sizeof(int), cudaHostAllocDefault);
            printf("segfault comming %d\n", in_h);
            for (int i = 0; i < length; i++)
            {
                in_h[i] = 2;
            }
            free(in_h);
            return EXIT_SUCCESS;
        }

    Invocation:

        [id129]$ nvcc fun.cu
        [id129]$ ./a.out
        segfault comming 327641824
        Segmentation fault (core dumped)

    Details: the program is run in interactive mode on a cluster. I was told that invoking the program from the compute node pushes it to the cluster. I have not had any trouble with other home-made toy CUDA codes.
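    Nothing in the snippet checks whether cudaHostAlloc actually succeeded, and the pinned buffer is released with free() instead of cudaFreeHost(). One plausible explanation on a cluster front-end node is that no CUDA device is visible there, so the allocation fails and the pointer is left pointing at garbage while plain malloc still works. A hedged sketch of the same program with error checking added (the message wording is mine, not the original author's):

        #include <stdio.h>
        #include <stdlib.h>
        #include <cuda_runtime.h>

        int main()
        {
            int *in_h = NULL;
            const int length = 10000;

            // check the return code instead of assuming the allocation worked
            cudaError_t err = cudaHostAlloc((void**)&in_h, length * sizeof(int), cudaHostAllocDefault);
            if (err != cudaSuccess) {
                fprintf(stderr, "cudaHostAlloc failed: %s\n", cudaGetErrorString(err));
                return EXIT_FAILURE;
            }

            printf("pinned buffer at %p\n", (void*)in_h);  // %p, not %d, for a pointer
            for (int i = 0; i < length; i++)
                in_h[i] = 2;

            cudaFreeHost(in_h);  // pinned memory must be released with cudaFreeHost, not free()
            return EXIT_SUCCESS;
        }

    The error string printed on failure usually points at the real cause (for example, no device available from the node you launched on).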


  • How can I compact the VHD file with Ubuntu?

    - by AmShegar
    I use Windows Server 2008 R2 with the Hyper-V role. The guest system is Ubuntu 12.04 LTS, and it sits on a dynamic virtual hard disk. I want to compact this VHD (the real size is 50 GB, but it takes 360 GB on disk), but I cannot, because the Ubuntu file system is not NTFS - it is ext4. What do I need (gparted, sdelete, ...) to solve this problem?
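    For what it's worth, Hyper-V's "Edit Virtual Hard Disk > Compact" can only reclaim space that reads back as zeros, so the recipe that is usually suggested is to zero the guest's free space first and then compact from the host. A rough sketch; device and file names are assumptions, check yours with lsblk:

        # inside the Ubuntu guest, on the mounted ext4 filesystem:
        sudo dd if=/dev/zero of=/zerofill bs=1M || true   # fill free space with zeros (dd stops when the disk is full)
        sudo rm /zerofill && sync                         # remove the filler and flush

        # or, with the filesystem unmounted or read-only (e.g. from a live CD),
        # assuming the root partition is /dev/sda1:
        sudo zerofree /dev/sda1

    After shutting the VM down, run Compact from the Hyper-V Edit Disk wizard; sdelete is only for Windows guests, and gparted is not needed for this.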


  • Linux C++: Linker is outputting strange errors

    - by knight666
    Alright, here is the output I get: arm-none-linux-gnueabi-ld --entry=main -dynamic-linker=/system/bin/linker -rpath-link=/home/oem/android-ndk-r3/build/platforms/android-5/arch-arm/usr/lib -L/home/oem/android-ndk-r3/build/platforms/android-5/arch-arm/usr/lib -nostdlib -lstdc++ -lm -lGLESv1_CM -rpath=/home/oem/android-ndk-r3/build/platforms/android-5/arch-arm/usr/lib -rpath=../../YoghurtGum/lib/Android -L./lib/Android intermediate/Alien.o intermediate/Bullet.o intermediate/Game.o intermediate/Player.o ../../YoghurtGum/bin/YoghurtGum.a -o bin/Galaxians.android intermediate/Game.o: In function `Galaxians::Init()': /media/YoghurtGum/Tests/Galaxians/src/Game.cpp:45: undefined reference to `__cxa_end_cleanup' /media/YoghurtGum/Tests/Galaxians/src/Game.cpp:44: undefined reference to `__cxa_end_cleanup' intermediate/Game.o:(.ARM.extab+0x18): undefined reference to `__gxx_personality_v0' intermediate/Game.o: In function `Player::Update()': /media/YoghurtGum/Tests/Galaxians/src/Player.h:41: undefined reference to `__cxa_end_cleanup' intermediate/Game.o:(.ARM.extab.text._ZN6Player6UpdateEv[_ZN6Player6UpdateEv]+0x0): undefined reference to `__gxx_personality_v0' intermediate/Game.o:(.rodata._ZTIN10YoghurtGum4GameE[_ZTIN10YoghurtGum4GameE]+0x0): undefined reference to `vtable for __cxxabiv1::__class_type_info' intermediate/Game.o:(.rodata._ZTI6Player[_ZTI6Player]+0x0): undefined reference to `vtable for __cxxabiv1::__si_class_type_info' intermediate/Game.o:(.rodata._ZTIN10YoghurtGum6EntityE[_ZTIN10YoghurtGum6EntityE]+0x0): undefined reference to `vtable for __cxxabiv1::__si_class_type_info' intermediate/Game.o:(.rodata._ZTIN10YoghurtGum6ObjectE[_ZTIN10YoghurtGum6ObjectE]+0x0): undefined reference to `vtable for __cxxabiv1::__class_type_info' intermediate/Game.o:(.rodata._ZTI6Bullet[_ZTI6Bullet]+0x0): undefined reference to `vtable for __cxxabiv1::__si_class_type_info' intermediate/Game.o:(.rodata._ZTI5Alien[_ZTI5Alien]+0x0): undefined reference to `vtable for __cxxabiv1::__si_class_type_info' intermediate/Game.o:(.rodata+0x20): undefined reference to `vtable for __cxxabiv1::__si_class_type_info' ../../YoghurtGum/bin/YoghurtGum.a(Sprite.o):(.rodata._ZTIN10YoghurtGum16SpriteDataOpenGLE[_ZTIN10YoghurtGum16SpriteDataOpenGLE]+0x0): undefined reference to `vtable for __cxxabiv1::__si_class_type_info' ../../YoghurtGum/bin/YoghurtGum.a(Sprite.o):(.rodata._ZTIN10YoghurtGum10SpriteDataE[_ZTIN10YoghurtGum10SpriteDataE]+0x0): undefined reference to `vtable for __cxxabiv1::__class_type_info' make: *** [bin/Galaxians.android] Fout 1 Here's an error I managed to decipher: intermediate/Game.o: In function `Galaxians::Init()': /media/YoghurtGum/Tests/Galaxians/src/Game.cpp:45: undefined reference to `__cxa_end_cleanup' /media/YoghurtGum/Tests/Galaxians/src/Game.cpp:44: undefined reference to `__cxa_end_cleanup' This is line 43 through 45: Assets::AddSprite(new Sprite("media\\ViperMarkII.bmp"), "ship"); Assets::AddSprite(new Sprite("media\\alien.bmp"), "alien"); Assets::AddSprite(new Sprite("media\\bat_ball.bmp"), "bullet"); So, what seems funny to me is that the first new is fine (line 43), but the second one isn't. What could cause this? intermediate/Game.o: In function `Player::Update()': /media/YoghurtGum/Tests/Galaxians/src/Player.h:41: undefined reference to `__cxa_end_cleanup' Another issue with new: Engine::game->scene_current->AddObject(new Bullet(m_X + 10, m_Y)); I have no idea where to begin with the other issues. These are my makefiles, They're a giant mess because I'm just trying to get it to work. 
Static library: # ====================================== # # # # YoghurtGum static library # # # # ====================================== # include ../YoghurtGum.mk PROGS = bin/YoghurtGum.a SOURCES = $(wildcard src/*.cpp) #$(YG_PATH_LIB)/libGLESv1_CM.so \ #$(YG_PATH_LIB)/libEGL.so \ YG_LINK_OPTIONS = -shared YG_LIBRARIES = \ $(YG_PATH_LIB)/libc.a \ $(YG_PATH_LIB)/libc.so \ $(YG_PATH_LIB)/libstdc++.a \ $(YG_PATH_LIB)/libstdc++.so \ $(YG_PATH_LIB)/libm.a \ $(YG_PATH_LIB)/libm.so \ $(YG_PATH_LIB)/libui.so \ $(YG_PATH_LIB)/liblog.so \ $(YG_PATH_LIB)/libGLESv2.so \ $(YG_PATH_LIB)/libcutils.so \ YG_OBJECTS = $(patsubst src/%.cpp, $(YG_INT)/%.o, $(SOURCES)) YG_NDK_PATH_LIB = /home/oem/android-ndk-r3/build/platforms/android-5/arch-arm/usr/lib all: $(PROGS) rebuild: clean $(PROGS) # remove all .o objects from intermediate and all .android objects from bin clean: rm -f $(YG_INT)/*.o $(YG_BIN)/*.a copy: acpy ../$(PROGS) $(PROGS): $(YG_OBJECTS) $(YG_ARCHIVER) -vq $(PROGS) $(YG_NDK_PATH_LIB)/crtbegin_static.o $(YG_NDK_PATH_LIB)/crtend_android.o $^ && \ $(YG_ARCHIVER) -vr $(PROGS) $(YG_LIBRARIES) $(YG_OBJECTS): $(YG_INT)/%.o : $(YG_SRC)/%.cpp $(YG_COMPILER) $(YG_FLAGS) -I $(GLES_INCLUDES) -c $< -o $@ Test game project: # ====================================== # # # # Galaxians # # # # ====================================== # include ../../YoghurtGum.mk PROGS = bin/Galaxians.android YG_COMPILER = arm-none-linux-gnueabi-g++ YG_LINKER = arm-none-linux-gnueabi-ld YG_PATH_LIB = ./lib/Android YG_LIBRARIES = ../../YoghurtGum/bin/YoghurtGum.a YG_PROGS = bin/Galaxians.android GLES_INCLUDES = ../../YoghurtGum/src ANDROID_NDK_ROOT = /home/oem/android-ndk-r3 NDK_PLATFORM_VER = 5 YG_NDK_PATH_LIB = $(ANDROID_NDK_ROOT)/build/platforms/android-$(NDK_PLATFORM_VER)/arch-arm/usr/lib YG_LIBS = -nostdlib -lstdc++ -lm -lGLESv1_CM #YG_COMPILE_OPTIONS = -g -rdynamic -Wall -Werror -O2 -w YG_COMPILE_OPTIONS = -g -Wall -Werror -O2 -w YG_LINK_OPTIONS = --entry=main -dynamic-linker=/system/bin/linker -rpath-link=$(YG_NDK_PATH_LIB) -L$(YG_NDK_PATH_LIB) $(YG_LIBS) SOURCES = $(wildcard src/*.cpp) YG_OBJECTS = $(patsubst src/%.cpp, intermediate/%.o, $(SOURCES)) all: $(PROGS) rebuild: clean $(PROGS) clean: rm -f intermediate/*.o bin/*.android $(PROGS): $(YG_OBJECTS) $(YG_LINKER) $(YG_LINK_OPTIONS) -rpath=$(YG_NDK_PATH_LIB) -rpath=../../YoghurtGum/lib/Android -L$(YG_PATH_LIB) $^ $(YG_LIBRARIES) -o $@ $(YG_OBJECTS): intermediate/%.o : src/%.cpp $(YG_COMPILER) $(YG_COMPILE_OPTIONS) -I ../../YoghurtGum/src/GLES -I ../../YoghurtGum/src -c $< -o $@ Any help would be appreciated.
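    A hedged note on those undefined references: __cxa_end_cleanup, __gxx_personality_v0 and the __cxxabiv1 vtables come from the C++ exception/RTTI support runtime (libsupc++ and libgcc), and the minimal libstdc++ shipped in the NDK platform library directory does not provide them, so a raw ld link with -nostdlib never sees them. One sketch of a change to the Galaxians link rule (untested; also note that with GNU ld, libraries are safest listed after the object files that need them):

        # sketch: append the C++ ABI/unwind runtimes from the cross toolchain at the end of the link line.
        # Locate the toolchain copies with: arm-none-linux-gnueabi-g++ -print-file-name=libsupc++.a
        $(PROGS): $(YG_OBJECTS)
        	$(YG_LINKER) $(YG_LINK_OPTIONS) -rpath=$(YG_NDK_PATH_LIB) -rpath=../../YoghurtGum/lib/Android \
        		-L$(YG_PATH_LIB) $^ $(YG_LIBRARIES) -lsupc++ -lgcc -o $@

    Linking with arm-none-linux-gnueabi-g++ as the driver instead of raw ld (passing the ld-specific flags via -Wl,) is the other common way to have these runtime libraries added for you.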


  • dhcpd pool exhaustion - What's the result?

    - by jarmund
    I have a DHCP server that serves leases to several hundred, maybe up to a thousand, different clients on an average day. The pool consists of 242 IPs and, due to the highly dynamic nature of the network, it is enough 99% of the time (most devices are gone from the network within a few minutes), despite a lease time of 3600 seconds. Now imagine more clients than that connecting to the network within an hour. The solution is obvious (decrease the lease time or increase the DHCP pool), but what I would like to know is: what happens when dhcpd has exhausted the pool? Are new DHCP requests simply ignored?


  • Problem designing xsd schema - because of a variable element name

    - by ssaboum
    Hi everyone, I'm not the best at creating XSD schemas as this is actually my first one. I would like to validate an XML document that must look like this:

        <?xml version="1.0"?>
        <Data>
          <FIELD name='toto'>
            <META mono='false' dynamic='false'>
              <COLUMN1>
                <REFTABLE>table</REFTABLE>
                <REFCOLUMN>key_column</REFCOLUMN>
                <REFLABELCOLUMN>test_column</REFLABELCOLUMN>
              </COLUMN1>
              <COLUMN2>
                <REFTABLE>table</REFTABLE>
                <REFCOLUMN>key_column</REFCOLUMN>
                <REFLABELCOLUMN>test_column</REFLABELCOLUMN>
              </COLUMN2>
            </META>
            <VALUEs>
              <VALUE>...</VALUE>
            </VALUEs>
          </FIELD>
        </Data>

    My problem is that inside the META block the tags "COLUMN1", "COLUMN2" are always different; the name may be any COLUMNxxx. For now my schema is:

        <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <xsd:element name="Data">
            <xsd:complexType>
              <xsd:sequence>
                <xsd:element name="FIELD" type="Field" />
              </xsd:sequence>
              <xsd:attribute name="id" type="xsd:int" use="required" />
            </xsd:complexType>
          </xsd:element>
          <xsd:complexType name="dataSourceDef">
            <xsd:sequence>
              <xsd:element name="DSD_REFTABLE" type="xsd:string" />
              <xsd:element name="DSD_REFCOLUMN" type="xsd:string" />
              <xsd:element name="DSD_REFLABELCOLUMN" type="xsd:string" />
            </xsd:sequence>
          </xsd:complexType>
          <xsd:complexType name="MetaTag">
            <xsd:sequence>
              <xsd:any processContents="lax" />
            </xsd:sequence>
            <xsd:attribute name="mono" type="xsd:string" use="required" />
            <xsd:attribute name="dynamic" type="xsd:string" use="required"/>
          </xsd:complexType>
          <xsd:complexType name="Field">
            <xsd:sequence>
              <xsd:element name="META" type="MetaTag" minOccurs="1" />
              <xsd:element name="VALUEs">
                <xsd:complexType>
                  <xsd:sequence>
                    <xsd:any processContents="lax" />
                  </xsd:sequence>
                </xsd:complexType>
              </xsd:element>
            </xsd:sequence>
            <xsd:attribute name="name" type="xsd:string" use="required"/>
          </xsd:complexType>
        </xsd:schema>

    And I just can't get it to work. I don't know how to handle the fact that one precise level of my nodes isn't fixed while the rest is. Would you help me please? Thanks.
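    One hedged guess at why validation fails: xsd:any defaults to minOccurs="1" maxOccurs="1", so the MetaTag type above accepts exactly one COLUMNxxx child, while the sample document has two. Allowing any number of arbitrarily named children would look something like this (untested sketch):

        <xsd:complexType name="MetaTag">
          <xsd:sequence>
            <!-- accept any number of COLUMNxxx-style elements; "lax" only validates them
                 if a matching global declaration happens to exist -->
            <xsd:any processContents="lax" minOccurs="0" maxOccurs="unbounded" />
          </xsd:sequence>
          <xsd:attribute name="mono" type="xsd:string" use="required" />
          <xsd:attribute name="dynamic" type="xsd:string" use="required" />
        </xsd:complexType>

    If the content of each COLUMNxxx must itself be validated, the usual cleaner design is a repeated element with a fixed name and the column number as an attribute (for example <COLUMN index="1">), since XSD cannot pattern-match element names.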


  • Windows redirect traffic to different DNS name not fixed IP address (hosts file equivalent)

    - by Arik Raffael Funke
    Using the Windows hosts file, one can redirect traffic for a domain to a specific IP address, e.g. domainA.com -> 127.0.0.1. I am looking for a SIMPLE way to do the same, but with a target domain name rather than a target IP address (as the IP is dynamic), i.e. domainA.com -> domainB.com.
    Addition: after getting some initial answers I think I need to make my question more concrete. Situation: I have an application which looks up the IP of the target domain via DNS and then connects via HTTP to that IP address. I do not have control over any proxy settings.
    Option 1: basically I am looking for a way to intercept DNS requests for domainA.com, launch a DNS request for domainB.com, and serve the IP of domainB.com in response to the request for domainA.com, all without running an entire DNS server.
    Option 2: if a DNS server is the only way, I would alternatively be happy with a solution for defining a non-standard DNS server for a single application. Any ideas for wrapper applications, etc.?


  • Windows 2008 IIS 7 PHP Caching / Blank Page Problems?

    - by darkAsPitch
    I don't even know how to explain this. The only thing I can think is 'why am I working with a Windows server?' I am renting a dedicated 1and1 server on which I installed PHP myself, with FastCGI and caching (I'm pretty sure I checked OK on something about dynamic caching for PHP when I installed it). After every few hours of intensive PHP processing, my pages start locking up, usually just showing blank pages, with no errors whatsoever. Just now I checked a page (let's call it a.php) and it was showing the output of b.php - I thought I had been hacked! Simply restarting IIS, however, fixes the problem. Any ideas / help / knowledge about similar problems on Windows 2008?


  • Why does WebDAV fail from inside home network

    - by Claus
    On my OS X server there is a folder that is configured in the Server app to be accessible via WebDAV. This folder is used to sync OmniFocus. On my router I have set up dynamic DNS. When I am outside my home network (physically away, or connected via a VPN), I can connect and sync fine via https://<server name from dyndns>/<username>/<path to WebDAV folder>. However, when I am inside my home network, the WebDAV connection does not work (other connections, e.g. AFP, do work). What could be some reasons why I can't connect to WebDAV from within my home network? Which log files could give hints, and where are they stored? I am running OS X Server 10.9.3 and Server.app. Thanks for your help.
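    One frequent culprit is that the dynamic-DNS name resolves to the router's public address, and the router does not hairpin that connection back into the LAN for HTTPS even though it may for other services. A quick hedged test from a client on the LAN is to pin the name to the server's private address in /etc/hosts (both values below are placeholders standing in for your own LAN IP and dyndns host name):

        # /etc/hosts on the client (edit with sudo); placeholder values
        192.168.1.10    yourname.dyndns.example

    If syncing works with that entry in place, split-horizon DNS (answering the dyndns name with the LAN address inside your network) is the usual permanent fix.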


  • Haskell math performance

    - by Travis Brown
    I'm in the middle of porting David Blei's original C implementation of Latent Dirichlet Allocation to Haskell, and I'm trying to decide whether to leave some of the low-level stuff in C. The following function is one example; it's an approximation of the second derivative of lgamma:

        double trigamma(double x)
        {
            double p;
            int i;
            x=x+6;
            p=1/(x*x);
            p=(((((0.075757575757576*p-0.033333333333333)*p+0.0238095238095238)
                *p-0.033333333333333)*p+0.166666666666667)*p+1)/x+0.5*p;
            for (i=0; i<6 ;i++)
            {
                x=x-1;
                p=1/(x*x)+p;
            }
            return(p);
        }

    I've translated this into more or less idiomatic Haskell as follows:

        trigamma :: Double -> Double
        trigamma x = snd $ last $ take 7 $ iterate next (x' - 1, p')
          where
            x' = x + 6
            p  = 1 / x' ^ 2
            p' = p / 2 + c / x'
            c  = foldr1 (\a b -> (a + b * p)) [1, 1/6, -1/30, 1/42, -1/30, 5/66]
            next (x, p) = (x - 1, 1 / x ^ 2 + p)

    The problem is that when I run both through Criterion, my Haskell version is six or seven times slower (I'm compiling with -O2 on GHC 6.12.1). Some similar functions are even worse. I know practically nothing about Haskell performance, and I'm not terribly interested in digging through Core or anything like that, since I can always just call the handful of math-intensive C functions through FFI. But I'm curious about whether there's low-hanging fruit that I'm missing: some kind of extension or library or annotation that I could use to speed up this numeric stuff without making it too ugly.
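    A hedged sketch of the usual first round of low-hanging fruit: replace the lazily consumed iterate/take/last pipeline, which keeps a boxed tuple alive, with a strict explicitly recursive loop, and avoid ^ with a defaulted integer exponent. Untested; the constants are copied from the C version:

        {-# LANGUAGE BangPatterns #-}

        trigamma :: Double -> Double
        trigamma x = go (6 :: Int) x6 p0
          where
            x6 = x + 6
            p  = 1 / (x6 * x6)
            -- same Horner polynomial as the C code
            p0 = (((((0.075757575757576*p - 0.033333333333333)*p + 0.0238095238095238)
                   *p - 0.033333333333333)*p + 0.166666666666667)*p + 1) / x6 + 0.5*p
            -- strict accumulator loop instead of iterate/take/last
            go :: Int -> Double -> Double -> Double
            go 0 _  !acc = acc
            go n !y !acc = let y' = y - 1
                           in go (n - 1) y' (1 / (y' * y') + acc)

    With -O2, a strict worker loop like this usually lets GHC unbox the Doubles; if it is still slow, the FFI route you mention remains a perfectly reasonable fallback.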


  • How to make numbered chapter titles and paragraph headers in iWork Pages 09?

    - by dyve
    For most of my document writing I use iWork Pages (from iWork '09), and it's usually fine for me. I don't miss Microsoft Word, except for one simple feature: the ability to number chapter titles and paragraph headers for easy reference in the table of contents and in cross-references. Somehow, I cannot find this feature in Pages '09. It is possible to number headers by setting the style to numbered, but the numbering doesn't carry over well into the generated table of contents, and paragraphs don't seem to follow the numbering of higher-level elements. Does anyone know how to make this work?


  • ESXi boot time with 9 iSCSI targets

    - by Myles Gray
    Our ESXi hosts have always been slow to boot when they reach the "iscsi_vmk loaded successfully" step, sitting there for almost 5 minutes; in all, a full server reboot takes almost 12 minutes. We have 9 iSCSI targets per host (5 SANs with redundant interfaces) configured as dynamic discovery targets. Has anyone experienced this? Can it be remedied with static discovery mode? Are there any debug steps we can work through to help diagnose this? (All our targets are accessible at boot, so I'm assuming the host isn't stuck retrying to connect to a target.)


  • Apache AliasMatch and DirectoryMatch not working?

    - by Alex
    I have the following config. Please notice the commented-out Alias and Directory equivalents: uncommented, they work as expected, but the dynamic/regex-based versions don't. Any ideas?

        <VirtualHost *:80>
            ServerName temp.dev.local
            ServerAlias temp.dev.local
            DocumentRoot "C:\wamp\www\temp\public"
            <Directory "C:\wamp\www\temp\public">
                AllowOverride all
                Order Allow,Deny
                Allow from all
            </Directory>

            # Alias /private/application/core/page/assets/images/ "C:/wamp/www/temp/private/application/core/page/assets/images/"
            # <Directory "C:/wamp/www/temp/private/application/core/page/assets/images/">

            AliasMatch ^/private/application/(.*)/(.*)/assets/images/ /private/application/$1/$2/assets/images/
            <DirectoryMatch "^/private/application/(.*)/(.*)/assets/images/">
                Options Indexes FollowSymlinks MultiViews Includes
                AllowOverride None
                Order allow,deny
                Allow from all
            </DirectoryMatch>
        </VirtualHost>
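    A hedged observation: the working Alias maps a URL onto a filesystem path under C:/wamp/..., while the AliasMatch above maps the URL onto another URL-style path, and DirectoryMatch (like Directory) matches filesystem paths rather than URLs. A sketch of the regex versions with that corrected (untested; the path layout is taken from the commented-out lines):

        AliasMatch ^/private/application/([^/]+)/([^/]+)/assets/images/(.*)$ "C:/wamp/www/temp/private/application/$1/$2/assets/images/$3"
        <DirectoryMatch "^C:/wamp/www/temp/private/application/[^/]+/[^/]+/assets/images/">
            Options Indexes FollowSymlinks MultiViews Includes
            AllowOverride None
            Order allow,deny
            Allow from all
        </DirectoryMatch>

    Capturing the trailing part of the URL with (.*)$ and appending $3 keeps individual image requests mapping to individual files rather than to the directory itself.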


  • How to set up JBoss with S3_Ping on AWS?

    - by Jonik
    I'm looking into running clustered JBoss on Amazon Web Services (AWS). I'd like to try out S3_PING, i.e. making JBoss use an S3 bucket for dynamic node discovery and so on, since no multicast is available. I found a piece of example config XML related to S3_PING, but I'm not sure where in the JBoss installation you're supposed to configure this. So, which JBoss config files would I need to tweak to get S3_PING working? Can anyone point me to a more complete example? This is JBoss 5.1.0 GA. (It's probably more a JGroups/JBoss question than anything else. I've already got the S3 bucket set up for this, so no problem there.)


  • Optimizing processing and management of large Java data arrays

    - by mikera
    I'm writing some pretty CPU-intensive, concurrent numerical code that will process large amounts of data stored in Java arrays (e.g. lots of double[100000]s). Some of the algorithms might run millions of times over several days, so getting maximum steady-state performance is a high priority. In essence, each algorithm is a Java object that has a method API something like:

        public double[] runMyAlgorithm(double[] inputData);

    or alternatively a reference could be passed in for the array that stores the output data:

        public void runMyAlgorithm(double[] inputData, double[] outputData);

    Given this requirement, I'm trying to determine the optimal strategy for allocating / managing array space. Frequently the algorithms will need large amounts of temporary storage space. They will also take large arrays as input and create large arrays as output. Among the options I am considering are:
    1. Always allocate new arrays as local variables whenever they are needed (e.g. new double[100000]). Probably the simplest approach, but it will produce a lot of garbage.
    2. Pre-allocate temporary arrays and store them as final fields in the algorithm object. The big downside is that only one thread could run the algorithm at any one time.
    3. Keep pre-allocated temporary arrays in ThreadLocal storage, so that a thread can use a fixed amount of temporary array space whenever it needs it. ThreadLocal would be required since multiple threads will be running the same algorithm simultaneously.
    4. Pass around lots of arrays as parameters (including the temporary arrays for the algorithm to use). Not good, since it makes the algorithm API extremely ugly if the caller has to be responsible for providing temporary array space.
    5. Allocate extremely large arrays (e.g. double[10000000]) but also provide the algorithm with offsets into the array, so that different threads use different areas of the array independently. This obviously requires some code to manage the offsets and the allocation of array ranges.
    Any thoughts on which approach would be best (and why)? A sketch of option 3 follows this list.
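    A minimal sketch of option 3, assuming a fixed upper bound on the temporary space each invocation needs; the class name, buffer size and the placeholder computation are illustrative, not from the question:

        // One scratch buffer per thread, created lazily on first use and then reused,
        // so concurrent callers never share or repeatedly reallocate temporary storage.
        public class MyAlgorithm {
            private static final int TEMP_SIZE = 100000;

            private static final ThreadLocal<double[]> TEMP = new ThreadLocal<double[]>() {
                @Override
                protected double[] initialValue() {
                    return new double[TEMP_SIZE];
                }
            };

            public double[] runMyAlgorithm(double[] inputData) {
                double[] temp = TEMP.get();          // this thread's private temporary array
                double[] output = new double[inputData.length];
                for (int i = 0; i < inputData.length; i++) {
                    temp[i % TEMP_SIZE] = inputData[i] * 2.0;   // placeholder for the real work
                    output[i] = temp[i % TEMP_SIZE];
                }
                return output;
            }
        }

    The trade-off is that each pooled thread permanently holds one buffer per algorithm class, which is usually acceptable when the thread count is bounded by a fixed-size executor.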


  • Why is this static routing not working ?

    - by geeko
    Greetings, gurus. I'm trying to develop a DHCP enforcement extension like Microsoft NAP. My trick for blocking machines that request a dynamic IP but don't meet a certain policy is to strip the default gateway from the IP lease (no default gateway) and set the lease subnet mask to 255.255.255.255. Now I need the blocked machines to be able to reach some specific locations (IPs) on the network, so I'm including some static routes in the lease. For example, I include a route to 10.10.10.11 via the router 10.10.10.254 (the one to which the blocked machine that needs to access 10.10.10.11 is connected). Unfortunately, as soon as I set the default gateway to nothing, the blocked machines cannot reach any of the added static routes. I also tried classless static routes. Any ideas? Does anyone know how MS NAP actually does it? Geeko


  • nginx is not using gzip to talk to backend servers

    - by Michael Gorsuch
    Our web servers are running IIS 7 and are configured to compress dynamic and static content. When I hit these servers directly, gzip compression works. I recently placed nginx in front of them, and gzip compression has stopped. I was able to work around this by explicitly enabling gzip compression on nginx itself, but that seems a little inefficient considering I have half a dozen backends and only one active nginx box. It appears that nginx is stripping out the Accept-Encoding header. Does anyone have any advice on how to 'correct' this behavior? A sample configuration:

        upstream backend {
            server 127.0.0.1:8080;
        }

        server {
            listen 80;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

            location / {
                proxy_pass http://backend;
            }
        }
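    A hedged note rather than a confirmed diagnosis: nginx talks HTTP/1.0 to proxied backends unless told otherwise, and IIS 7 by default skips compression for HTTP/1.0 and proxied requests (the noCompressionForHttp10 / noCompressionForProxies attributes under system.webServer/httpCompression in applicationHost.config), which produces exactly this symptom even when Accept-Encoding arrives intact. If your nginx is 1.1.4 or newer, one sketch on the nginx side:

        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;                  # speak HTTP/1.1 to the IIS backends
            proxy_set_header Connection "";          # required for backend keep-alive with 1.1
            proxy_set_header Accept-Encoding gzip;   # make the forwarded header explicit
        }

    The alternative is to flip the two IIS attributes to false so the backends compress regardless of how the request arrives.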


  • Adding operation in middle of complex sequence diagram in visio 2003

    - by James
    I am using Microsoft Visio 2003 to define static classes with operations/methods and sequence diagrams referring to these classes. The sequence diagram is almost done, but I realized that I missed one operation in the middle of the diagram. When I try to move the rest of the sequence down by selecting it as a block, all the operations in the block lose their link with the static diagrams. (Methods that referred to static classes as fun() become fun, which means they no longer refer to the static diagrams, and any future changes would not be reflected in the dynamic sequence diagrams automatically.) The sequence diagrams have grown to A3 paper size and I have many such diagrams that need correction. Manually moving the operations one by one would take a lot of effort. Could someone kindly suggest a way to overcome this problem?


  • new vhost - main host AWstats

    - by vn
    Hi, I just started at a new job and I have to configure a new host for stats with AWStats. I once used AWStats on my own server, no biggie. Now I'm on a multi-site server with the access_log files nicely split per site. I copied an awstats.conf file from one of the sites that already has (working) stats, changed the LogFile and SiteDomain values as described at http://awstats.sourceforge.net/docs/awstats_setup.html#BUILD_UPDATE, saved the conf and ran the commands perl awstats.pl -config=mysite -update and perl awstats.pl -config=mysite -output -staticlinks awstats.mysite.html (yes, I changed them with my own info...). THE PROBLEM IS: whenever I try to access the HTML file or the dynamic page (with the config option on awstats.pl, like my working site does), I get the stats of the MAIN site built from access.log itself (and not access_log-mysite), judging from what it says at the top of the page and from the hostname in the left tab (stats for mysite.com)... What did I do wrong? There are no errors as far as I can see... Thanks a lot for any help.
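    For reference, AWStats keys on two things here: the config file's name (the part after "awstats." must match what you pass to -config=) and the LogFile/DirData lines inside it; if either still points at the main site, you get exactly this behaviour. A hedged sketch, where the file name and paths are assumptions following the naming used in the question:

        # /etc/awstats/awstats.mysite.conf  -- the suffix must match -config=mysite
        LogFile="/var/log/apache2/access_log-mysite"
        SiteDomain="mysite.com"
        DirData="/var/lib/awstats/mysite"    # keep per-site data separate so old main-site data is not reused

    If the dynamic page still shows the main site, double-check that the URL really contains config=mysite and that a stale DirData directory from an earlier run isn't being displayed.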


  • Using FastCGI for PHP on Mac OS X

    - by DanieL
    I have Apache 2 running on a Mac OS X (10.6) machine, and it is currently serving PHP pages fine using php5_module, but I would like to configure fastcgi_module to handle the PHP pages. I have tried the configuration found on www.fastcgi.com but I get the following errors:

        [warn] FastCGI: (dynamic) server "/Path/to/script.php" has failed to remain running for 30 seconds given 3 attempts, its restart interval has been backed off to 600 seconds
        [warn] FastCGI: server "/usr/bin/php" has failed to remain running for 30 seconds given 3 attempts, its restart interval has been backed off to 600 seconds

    I'm thinking this is because PHP has not been compiled with FastCGI support, but seeing as it came with Mac OS X, I'm not sure how to recompile it. Is that the problem? And if so, how do I recompile PHP with FastCGI?
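    One hedged pointer: /usr/bin/php is the CLI binary, which exits as soon as it is run, so mod_fastcgi keeps giving up on it; what mod_fastcgi needs is the CGI/FastCGI binary (usually named php-cgi), typically launched through a small wrapper that keeps a pool of children alive. A sketch, assuming a php-cgi binary is available (Snow Leopard's bundled PHP may not ship one, in which case a MacPorts or Homebrew PHP build would provide it):

        #!/bin/sh
        # /usr/local/bin/php-wrapper.fcgi  (path is an assumption)
        export PHP_FCGI_CHILDREN=4         # number of persistent PHP children per wrapper
        export PHP_FCGI_MAX_REQUESTS=1000  # recycle children periodically
        exec /usr/local/bin/php-cgi        # adjust to wherever your php-cgi lives

    The Apache config then points FastCgiServer (or the AddHandler/Action pair) at this wrapper rather than at individual .php files.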


  • Apache+FastCGI Timeout Problem

    - by Sadjad Fouladi
    Hi all. I've recently installed mod_fastcgi and Apache 2.2. I have a simple CGI script (test.fcgi) as below:

        #!/bin/sh
        echo sadjad

    But when I invoke mysite.com/test.fcgi I see an "Internal Server Error" message after a short period of time. The error.log file shows this error message:

        [Tue Jan 31 22:23:57 2006] [warn] FastCGI: (dynamic) server "~/public_html/oaduluth/dispatch.fcgi" has failed to remain running for 30 seconds given 3 attempts, its restart interval has been backed off to 600 seconds

    This is my .htaccess file:

        AddHandler fastcgi-script .fcgi
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ django.fcgi/$1 [QSA,L]

    I'm very confused, please help me! (Sorry for my poor English!)
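    A hedged reading of the error: a script handled as fastcgi-script is expected to stay resident and speak the FastCGI protocol in an accept loop, so a shell script that echoes once and exits will always trigger the "failed to remain running" warning (and note the rewrite rule is sending requests to django.fcgi, not to test.fcgi). If the goal is really just a one-shot shell script, plain CGI is the simpler fit, e.g. in .htaccess:

        # sketch: serve the script as ordinary CGI instead of FastCGI
        Options +ExecCGI
        AddHandler cgi-script .cgi

    The script itself then needs to emit a Content-Type header and a blank line before its output, otherwise Apache will still report an internal server error.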

