Search Results

Search found 46973 results on 1879 pages for 'return path'.

Page 402/1879 | < Previous Page | 398 399 400 401 402 403 404 405 406 407 408 409  | Next Page >

  • Sending a android.content.Context parameter to a function with JNI

    - by Ef Es
    I am trying to create a method that checks for an internet connection, which needs a Context parameter. The JniHelper allows me to call static functions with parameters, but I don't know how to "retrieve" the Cocos2d-x Activity class to use it as a parameter.

        public static boolean isNetworkAvailable(Context context) {
            boolean haveConnectedWifi = false;
            boolean haveConnectedMobile = false;
            ConnectivityManager cm = (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);
            NetworkInfo[] netInfo = cm.getAllNetworkInfo();
            for (NetworkInfo ni : netInfo) {
                if (ni.getTypeName().equalsIgnoreCase("WIFI"))
                    if (ni.isConnected())
                        haveConnectedWifi = true;
                if (ni.getTypeName().equalsIgnoreCase("MOBILE"))
                    if (ni.isConnected())
                        haveConnectedMobile = true;
            }
            return haveConnectedWifi || haveConnectedMobile;
        }

    and the C++ code is

        JniMethodInfo methodInfo;
        if (!JniHelper::getStaticMethodInfo(methodInfo, "my/app/TestApp",
                                            "isNetworkAvailable", "(android/content/Context;)V")) {
            // error
            return;
        }
        CCLog("Method found and loaded!");
        methodInfo.env->CallStaticVoidMethod(methodInfo.classID, methodInfo.methodID);
        methodInfo.env->DeleteLocalRef(methodInfo.classID);
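
    One hedged way around this is to let the Java side obtain the Context itself, so no Context ever crosses the JNI boundary. This assumes a cocos2d-x version whose Cocos2dxActivity exposes a static getContext() (newer 2.x releases do). Note also that the JNI signature in the question would be wrong even with a Context parameter: for a static boolean method taking a Context it would be "(Landroid/content/Context;)Z", invoked with CallStaticBooleanMethod, not CallStaticVoidMethod. A sketch with the no-argument variant:

        // Java side (sketch): fetch the Context inside the method instead of
        // receiving it as a parameter. Cocos2dxActivity.getContext() is an
        // assumption about the cocos2d-x version in use.
        //
        //   public static boolean isNetworkAvailable() {
        //       Context context = Cocos2dxActivity.getContext();
        //       ... same body as above ...
        //   }

        // C++ side: no-argument signature "()Z", read the boolean result.
        JniMethodInfo methodInfo;
        if (!JniHelper::getStaticMethodInfo(methodInfo, "my/app/TestApp",
                                            "isNetworkAvailable", "()Z")) {
            return; // method lookup failed
        }
        jboolean available = methodInfo.env->CallStaticBooleanMethod(
            methodInfo.classID, methodInfo.methodID);
        methodInfo.env->DeleteLocalRef(methodInfo.classID);
        CCLog("isNetworkAvailable: %d", (int)available);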

    Read the article

  • Extending Currying: Partial Functions in Javascript

    - by kerry
    Last week I posted about function currying in javascript.  This week I am taking it a step further by adding the ability to call partial functions. Suppose we have a graphing application that will pull data via Ajax and perform some calculation to update a graph, using a method with the signature ‘updateGraph(id, value)’.  To do this, we have to do something like this:

        for (var i = 0; i < objects.length; i++) {
            Ajax.request('/some/data', {id: objects[i].id}, function(json) {
                updateGraph(json.id, json.value);
            });
        }

    This works fine.  But, using this method, we need to return the id in the json response from the server.  That works, but is not that elegant and increases network traffic.  Using partial function currying we can bind the id parameter and add the second parameter later (when returning from the asynchronous call).  To do this, we will need the updated curry method.  I have added support for sending additional parameters at runtime for curried methods.

        Function.prototype.curry = function(scope) {
            scope = scope || window;
            var args = [];
            for (var i = 1, len = arguments.length; i < len; ++i) {
                args.push(arguments[i]);
            }
            var m = this;
            return function() {
                // concatenate onto a copy of the bound args, so repeated calls
                // to the curried function don't accumulate arguments
                var callArgs = args.concat(Array.prototype.slice.call(arguments));
                return m.apply(scope, callArgs);
            };
        }

    To partially curry this method we will call the curry method with the id parameter; the request will then call back on it with just the value.  Any additional parameters are appended to the method call.

        for (var i = 0; i < objects.length; i++) {
            var id = objects[i].id;
            // first argument to curry is the scope; null falls back to window
            Ajax.request('/some/data', {id: id}, updateGraph.curry(null, id));
        }

    As you can see, partial currying is a very useful tool, and this simple method should be a part of every developer’s toolbox.

    Read the article

  • IIS in VirtualBox serving files from shared Ubuntu folder

    - by queen3
    Here's the problem: I want to use Ubuntu, but I need to develop ASP.NET (MVC) sites. So I set up VirtualBox with Win2003 and IIS6, but I would prefer my working files to be located in my Ubuntu home folder. So I set up a shared folder in VirtualBox and point an IIS6 virtual directory at it. The problem is, IIS6 can't do this. Whatever I try (mapped drive, network URI path) I get different IIS errors: can't access folder (for the mapped drive), can't monitor file system changes (for the \\vboxsvr share path), and so on. Is there a way for IIS6 in the virtual machine to configure a virtual application folder to be on the host machine (Ubuntu) - be it shared folder, mapped drive, smb share, or whatever?
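
    For reference, a minimal sketch of the two guest-side ways to reach a VirtualBox shared folder, assuming the Guest Additions are installed and the share is named ubuntu-home (a hypothetical name). Since drive letters mapped in an interactive logon session are not visible to services such as IIS, the UNC form plus IIS's "A share located on another computer" option with explicit connect-as credentials is usually the more promising route:

        rem Direct UNC path to the VirtualBox shared folder (share name is hypothetical)
        dir \\vboxsvr\ubuntu-home

        rem Or map it to a drive letter; note that services like IIS do not
        rem see drive letters mapped in an interactive session
        net use X: \\vboxsvr\ubuntu-home /persistent:yes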

    Read the article

  • Hardlink files not the same

    - by SabreWolfy
    I created a hardlink of a file as follows: ln /path/to/source/file1 /path/to/target/file2 Using md5sum, the two files are identical. After a while, the source file has been modified by another program. The target file does not get "updated". The md5sums are now different. The files are on the same partition of course, otherwise I could not create a link. What I'm trying to do is get a copy of the source file into the target folder (which is versioned), so that I have access to the source file elsewhere. I tried moving the source file to the target folder with a different name and then creating a symlink to it at the source, but the program expecting the file then (somehow) created a file of the name it wanted in the target folder. Ideas?
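
    A likely explanation (an assumption about the other program's behavior, not something stated in the question) is that the program updates the file by writing a temporary copy and renaming it over the original. The rename replaces the directory entry with a new inode, so the hard link keeps pointing at the old data. A quick shell sketch that reproduces this:

        echo one > file1
        ln file1 file2
        ls -i file1 file2          # same inode number for both names
        echo two > file1.tmp
        mv file1.tmp file1         # replace-by-rename, as many editors do
        ls -i file1 file2          # file1 now has a new inode; file2 kept the old one
        md5sum file1 file2         # checksums now differ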

    Read the article

  • Powershell: Install-dotNET4 function

    - by marc dekeyser
    This function will download and install .NET 4.0. It uses the Get-Framework-Versions function to determine whether the installation is necessary. Internet connectivity is required, as the script downloads the setup file automatically (and sleeps for 360 seconds... I had a function in there to monitor for install completion at first; it turns out the setup file spawns so many child processes the function just got confused and locked up -_-). Alternatively you could drop the installation file in the folder specified in the $folderPath variable. That will skip the download and use the file. This function easily adapts to other versions, e.g. I use it for PowerShell 3 installs as well!

        Function install-dotNet4 () {
            if (($InstalledDotNET -eq "4.0") -or ($InstalledDotNET -eq "4.0c")) {
                write-host ".NET 4.0 Framework is already installed" -foregroundcolor Green
            } else {
                # set a var for the folder you are looking for
                $folderPath = 'C:\Temp'
                # Check if folder exists, if not, create it
                if (Test-Path $folderpath) {
                    Write-Host "The folder $folderPath exists." -ForeGroundColor Green
                } else {
                    Write-Host "The folder $folderPath does not exist, creating..." -NoNewline -ForegroundColor Red
                    New-Item $folderpath -type directory | Out-Null
                    Write-Host " - done!" -ForegroundColor Green
                }
                # Check if file exists, if not, download it
                $file = $folderPath + "\dotNetFx40_Full_x86_x64.exe"
                if (Test-Path $file) {
                    write-host "The file $file exists." -ForeGroundColor Green
                } else {
                    # Download Microsoft .Net 4.0 Framework
                    Write-Host "Downloading Microsoft .Net 4.0 Framework..." -nonewline -ForeGroundColor DarkYellow
                    $clnt = New-Object System.Net.WebClient
                    $url = "http://download.microsoft.com/download/9/5/A/95A9616B-7A37-4AF6-BC36-D6EA96C8DAAE/dotNetFx40_Full_x86_x64.exe"
                    $clnt.DownloadFile($url, $file)
                    Write-Host " - done!" -ForegroundColor Green
                }
                # Install Microsoft .Net Framework
                Write-Host "Installing Microsoft .Net Framework..." -nonewline -ForegroundColor DarkYellow
                $dotNET4 = $folderPath + "\dotNetFx40_Full_x86_x64.exe /quiet /norestart"
                Invoke-Expression $dotNET4
                write-host " - done!" -ForegroundColor Green
                start-sleep -seconds 360
            }
        }

    Read the article

  • nginx+php-fpm help optimize configs

    - by Dmitro
    I have 3 servers. The first server (CPU model name: 06/17, 2.66GHz, 4 cores, 8GB RAM) runs nginx as a load balancer with the following config:

        upstream lb_mydomain {
            server mydomain.ru:81 weight=2;
            server 66.0.0.18 weight=6;
        }

        server {
            listen 80;
            server_name ~(?!mydomain.ru)(.*);
            client_max_body_size 20m;

            location / {
                proxy_pass http://lb_mydomain;
                proxy_redirect off;
                proxy_set_header Connection close;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass_header Set-Cookie;
                proxy_pass_header P3P;
                proxy_pass_header Content-Type;
                proxy_pass_header Content-Disposition;
                proxy_pass_header Content-Length;
            }
        }

    And from nginx.conf:

        user www-data;
        worker_processes 5;
        # worker_priority -1;
        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        events {
            worker_connections 5024;
            # multi_accept on;
        }

        http {
            include /etc/nginx/mime.types;
            access_log /var/log/nginx/access.log;
            sendfile on;
            default_type application/octet-stream;
            #tcp_nopush on;
            keepalive_timeout 65;
            tcp_nodelay on;
            gzip on;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";

            # PHP-FPM (backend)
            upstream php-fpm {
                server 127.0.0.1:9000;
            }

            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    And the php-fpm config:

        listen = 127.0.0.1:9000
        ;listen.backlog = -1
        ;listen.allowed_clients = 127.0.0.1
        ;listen.owner = www-data
        ;listen.group = www-data
        ;listen.mode = 0666
        user = www-data
        group = www-data
        pm = dynamic
        pm.max_children = 80
        ;pm.start_servers = 20
        pm.min_spare_servers = 5
        pm.max_spare_servers = 35
        ;pm.max_requests = 500
        pm.status_path = /status
        ping.path = /ping
        ;ping.response = pong
        request_terminate_timeout = 30s
        request_slowlog_timeout = 10s
        slowlog = /var/log/php-fpm.log.slow
        ;rlimit_files = 1024
        ;rlimit_core = 0
        ;chroot =
        chdir = /var/www
        ;catch_workers_output = yes
        ;env[HOSTNAME] = $HOSTNAME
        ;env[PATH] = /usr/local/bin:/usr/bin:/bin
        ;env[TMP] = /tmp
        ;env[TMPDIR] = /tmp
        ;env[TEMP] = /tmp
        ;php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i -f [email protected]
        ;php_flag[display_errors] = off
        ;php_admin_value[error_log] = /var/log/fpm-php.www.log
        ;php_admin_flag[log_errors] = on
        ;php_admin_value[memory_limit] = 32M

    In top I see 20 php-fpm processes, each using 1% - 15% CPU, so the load average is high:

        top - 15:36:22 up 34 days, 20:54, 1 user, load average: 5.98, 7.75, 8.78
        Tasks: 218 total, 1 running, 217 sleeping, 0 stopped, 0 zombie
        Cpu(s): 34.1%us, 3.2%sy, 0.0%ni, 37.0%id, 24.8%wa, 0.0%hi, 0.9%si, 0.0%st
        Mem: 8183228k total, 7538584k used, 644644k free, 351136k buffers
        Swap: 9936892k total, 14636k used, 9922256k free, 990540k cached

    The second server (CPU model name: Intel(R) Xeon(R) CPU E5504 @ 2.00GHz, 8 cores, 8GB RAM) has an nginx.conf identical to the first server's. Its php-fpm config matches the first server's except for these lines:

        pm.max_children = 50
        ;pm.status_path = /status
        ;ping.path = /ping
        ;request_terminate_timeout = 0
        ;request_slowlog_timeout = 0
        ;slowlog = /var/log/php-fpm.log.slow

    In top I see 50 php-fpm processes, each using 10% - 25% CPU, so the load average is very high:

        top - 15:53:05 up 33 days, 1:15, 1 user, load average: 41.35, 40.28, 39.61
        Tasks: 239 total, 40 running, 199 sleeping, 0 stopped, 0 zombie
        Cpu(s): 96.5%us, 3.1%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.4%si, 0.0%st
        Mem: 8185560k total, 7804224k used, 381336k free, 161648k buffers
        Swap: 19802108k total, 16k used, 19802092k free, 5068112k cached

    The third server is the server with the PostgreSQL database. I also tried ab -n 50 -c 5 http://www.mydomain.ru/ and got:

        Complete requests: 50
        Failed requests: 48
        (Connect: 0, Receive: 0, Length: 48, Exceptions: 0)
        Write errors: 0
        Total transferred: 9271367 bytes
        HTML transferred: 9247767 bytes
        Requests per second: 1.02 [#/sec] (mean)
        Time per request: 4882.427 [ms] (mean)
        Time per request: 976.486 [ms] (mean, across all concurrent requests)
        Transfer rate: 185.44 [Kbytes/sec] received

    Please advise how I can lower the load average.
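
    Not part of the question, but a common rule of thumb when sizing pm.max_children (an assumption-laden sketch, not something from the original post) is to divide the RAM you can spare for PHP by the observed per-worker resident size, and on a CPU-bound box keep the worker count near the core count:

        # average resident size of the php-fpm workers, in KB
        # (the process name may be php5-fpm on some distributions)
        ps -C php-fpm -o rss= | awk '{sum += $1; n++} END {print sum / n}'

        # if workers average, say, ~60 MB and ~6 GB can be spared for PHP,
        # 6144 / 60 ~= 100 workers is the *memory* ceiling; but the second
        # server above is CPU-bound (96.5%us), so far fewer workers,
        # closer to the 8 cores, would likely lower the load average:
        #   pm.max_children = 16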

    Read the article

  • OBIEE 11.1.1 - How to enable HTTP compression and caching in Oracle iPlanet Web Server

    - by Ahmed Awan
    1. To implement HTTP compression / caching, install and configure Oracle iPlanet Web Server 7.0.x for the bi_serverN Managed Servers (refer to http://docs.oracle.com/cd/E23943_01/web.1111/e16435/iplanet.htm).

    2. On the Oracle iPlanet Web Server machine, open the Administrator's Configuration file (obj.conf) for editing. (Guidelines for modifying the obj.conf file are available at http://download.oracle.com/docs/cd/E19146-01/821-1827/821-1827.pdf.)

    3. Add the following lines in the obj.conf file inside <Object name="default"> ... </Object> and restart the Oracle iPlanet Web Server:

        # HTTP Caching
        <If $path =~ '^(.*)\.(jpg|jpeg|gif|png|css|js)$'>
            ObjectType fn="set-variable" insert-srvhdrs="Expires:$(httpdate($time + 864000))"
        </If>

        <If $path =~ '^(.*)\.(jpg|jpeg|gif|png|css|js)$'>
            PathCheck fn="set-cache-control" control="public,max-age=864000"
        </If>

        # HTTP Compression
        Output fn="insert-filter" filter="http-compression" vary="false" compression-level="9" fragment_size="8096"

    Read the article

  • InvokeRequired not reliable?

    - by marocanu2001
    InvalidOperationException: Cross-thread operation not valid: Control 'progressBar' accessed from a thread other than the thread it was created on... Now this is not a nice way to start a day! Not even if it's a Monday! So you have seen this and already thought: come on, this is an old one, just check InvokeRequired before calling the method on the control, something like this:

        if (progressBar.InvokeRequired)
        {
            progressBar.Invoke(new MethodInvoker(delegate { progressBar.Value = 50; }));
        }
        else
        {
            progressBar.Value = 50;
        }

    Unfortunately this was not working the way I would have expected: I still got the error, and debugging showed that although InvokeRequired had become true at some point, the error was thrown on the branch that did not require Invoke. So there was a scenario where InvokeRequired returned false and accessing the control was still not on the right thread... The problem was that I kept showing and hiding the little form containing the progress bar. The progress bar was updating on an event, ProgressChanged, and I could not guarantee the little form was loaded by the time the event was raised. So, the problem spotted: if none of the parents of the control you try to modify is created at the time of the method invocation, InvokeRequired returns false! That causes your code to execute on the wrong thread. Of course, updating the UI before the window was created is not a legitimate action either; still, I would have expected a different error. MSDN: "If the control's handle does not yet exist, InvokeRequired searches up the control's parent chain until it finds a control or form that does have a window handle. If no appropriate handle can be found, the InvokeRequired method returns false. This means that InvokeRequired can return false if Invoke is not required (the call occurs on the same thread), or if the control was created on a different thread but the control's handle has not yet been created."

    Have a look at InvokeRequired's implementation:

        public bool InvokeRequired
        {
            get
            {
                HandleRef hWnd;
                int lpdwProcessId;
                if (this.IsHandleCreated)
                {
                    hWnd = new HandleRef(this, this.Handle);
                }
                else
                {
                    Control wrapper = this.FindMarshallingControl();
                    if (!wrapper.IsHandleCreated)
                    {
                        return false; // <==========
                    }
                    hWnd = new HandleRef(wrapper, wrapper.Handle);
                }
                int windowThreadProcessId = SafeNativeMethods.GetWindowThreadProcessId(hWnd, out lpdwProcessId);
                int currentThreadId = SafeNativeMethods.GetCurrentThreadId();
                return (windowThreadProcessId != currentThreadId);
            }
        }

    Here's a good article about this and a workaround: http://www.ikriv.com/en/prog/info/dotnet/MysteriousHang.html

    Read the article

  • SSDT gotcha – Moving a file erases code analysis suppressions

    - by jamiet
    I discovered a little wrinkle in SSDT today that is worth knowing about if you are managing your database schemas using SSDT. In short, if a file is moved to a different folder in the project then any code analysis suppressions that reference that file will disappear from the suppression file. This makes sense if you think about it, because the paths stored in the suppression file are no longer valid, but you probably won't be aware of it until it happens to you. If you don't know what code analysis is or you don't know what the suppression file is then you can probably stop reading now; otherwise read on for a simple short demo. Let's create a new project and add a stored procedure to it called sp_dummy. Naming stored procedures with an sp_ prefix is generally frowned upon, and hence SSDT static code analysis will look for occurrences of this and flag them. So, the next thing we need to do is turn on static code analysis in the project properties. A subsequent build causes a code analysis warning, as we might expect. Let's suppose we actually don't mind stored procedures with sp_ prefixes; we can just right-click on the message to suppress it and get rid of it. That causes a suppression file to get created in our project. Notice that the suppression file contains a relative path to the file that has had the suppression placed upon it. Now, if we simply move the file within our project to a new folder, the suppression that we just created gets removed from the suppression file. As I alluded above, this behaviour is intuitive because the path originally stored in the suppression file is no longer relevant, but you're probably not going to be aware of it until it happens to you and messages that you thought you had suppressed start appearing again. Definitely one to be aware of. @Jamiet

    Read the article

  • Howto fix "[Errno 13] Permission denied" in mailman mailing lists

    - by Michael
    After migrating domains from one Plesk server onto another, I get several of these mails every day (the target mailbox does not exist, so I receive them as undeliverable mail bounces):

        Return-Path: <[email protected]>
        Received: (qmail 26460 invoked by uid 38); 26 May 2012 12:00:02 +0200
        Date: 26 May 2012 12:00:02 +0200
        Message-ID: <20120526100002.xyzxx.qmail@lvpsxxx-xx-xx-xx.dedicated.hosteurope.de>
        From: [email protected] (Cron Daemon)
        To: [email protected]
        Subject: Cron <list@lvpsxxx-xx-xx-xx> [ -x /usr/lib/mailman/cron/senddigests ] && /usr/lib/mailman/cron/senddigests
        Content-Type: text/plain; charset=ANSI_X3.4-1968
        X-Cron-Env: <SHELL=/bin/sh>
        X-Cron-Env: <HOME=/var/list>
        X-Cron-Env: <PATH=/usr/bin:/bin>
        X-Cron-Env: <LOGNAME=list>

        List: xyzxyz: problem processing /var/lib/mailman/lists/xyzxyz/digest.mbox:
            [Errno 13] Permission denied: '/var/lib/mailman/archives/private/xyzxyz'

    I tried to fix the permissions myself, but the problem still exists.
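
    A hedged starting point (assuming a standard Mailman 2.x layout; the install prefix and the group name may differ on a Plesk box) is to let Mailman's own permission checker repair ownership and modes rather than guessing them by hand:

        # report permission problems under Mailman's tree
        /usr/lib/mailman/bin/check_perms

        # apply the fixes in place (run as root); -f fixes what it finds
        /usr/lib/mailman/bin/check_perms -f

        # the archives named in the error specifically need to be writable by
        # Mailman's group (the cron mail above suggests it is 'list' here):
        chgrp -R list /var/lib/mailman/archives/private
        chmod -R g+rw /var/lib/mailman/archives/private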

    Read the article

  • optimizing file share performance on Win2k8?

    - by Kirk Marple
    We have a case where we're accessing a RAID array (drive E:) on a Windows Server 2008 SP2 x86 box. (Recently installed, nothing other than SQL Server 2005 on the server.) In one scenario, when directly accessing it (E:\folder\file.xxx) we get 45MBps throughput to a video file. If we access the same file on the same array, but through a UNC path (\\server\folder\file.xxx), we get about 23MBps throughput with the exact same test. Obviously the second test is going through more layers of the stack, but that's a major performance hit. What tuning should we be looking at to bring the UNC path closer in performance to the direct-access case? Thanks, Kirk (corrected: it is CIFS not SMB, but generalized title to 'file share'.) (additional info: this happens during the read from a single file, not an issue across multiple connections. The file is on the local machine, but exposed via file share, so the client and file server are both the same Windows 2008 server.)

    Read the article

  • Nginx reverse proxy + URL rewrite

    - by jeffreyveon
    Nginx is running on port 80, and I'm using it to reverse proxy URLs with path /foo to port 3200 this way:

        location /foo {
            proxy_pass http://localhost:3200;
            proxy_redirect off;
            proxy_set_header Host $host;
        }

    This works fine, but I have an application on port 3200 to which I don't want the initial /foo to be sent. That is - when I access http://localhost/foo/bar, I want only /bar to be the path as received by the app. So I tried adding this line:

        rewrite ^(.*)foo(.*)$ http://localhost:3200/$2 permanent;

    This causes a 302 redirect (change in URL), but I want 301. What should I do?
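
    For what it's worth, a sketch of the usual way to strip the prefix with no client-visible redirect at all (which matches the stated goal of not sending /foo upstream; a rewrite with an absolute URL always produces an external redirect). When proxy_pass carries a URI part, nginx replaces the matched location prefix with it, so /foo/bar reaches the app as /bar:

        location /foo/ {
            # the trailing slash on the proxy_pass URI replaces the matched
            # "/foo/" prefix, so the app sees /bar instead of /foo/bar
            proxy_pass http://localhost:3200/;
            proxy_redirect off;
            proxy_set_header Host $host;
        }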

    Read the article

  • What do I need for SSL?

    - by Ency
    Hi guys, just a quick question, I'm kind of confused. I've set up my own certification authority, and I can create requests and sign them. But I'm not sure what I need to give to Apache. Currently I've got:

        CA private key
        CA certificate
        Website private key
        Website certificate
        Website certificate request (I think I do not need it, but just to be clear)

    Until today I was using a snakeoil certificate, but I've decided to run more SSL services, so a CA looks like a good solution. My Apache was configured well, but now I am not sure what I should provide to Apache in the following directives:

        SSLCertificateKeyFile /path/to/Website Private Key
        SSLCertificateFile /path/to/CA Certificate

    But then I got:

        [Mon Dec 27 12:09:33 2010] [warn] RSA server certificate CommonName (CN) `EServer' does NOT match server name!?
        [Mon Dec 27 12:09:33 2010] [error] Unable to configure RSA server private key
        [Mon Dec 27 12:09:33 2010] [error] SSL Library Error: 185073780 error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch

    Something tells me the warning is quite weird, because "EServer" is the common name of the CA, so I think I should not use the CA certificate in SSLCertificateFile, should I? Do I need to create a certificate from the website private key, or something else?
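
    A minimal sketch of the usual arrangement (standard Apache mod_ssl directives; the file paths are placeholders): the server certificate and its matching private key go together, and the CA certificate only appears as the chain so clients can verify the server certificate - which also explains the key-mismatch error above, since the CA certificate does not match the website's private key:

        SSLEngine on
        # the website's own certificate and the key its CSR was generated from
        SSLCertificateFile      /etc/ssl/certs/website.crt
        SSLCertificateKeyFile   /etc/ssl/private/website.key
        # the CA certificate, sent to clients as the chain
        SSLCertificateChainFile /etc/ssl/certs/my-ca.crt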

    Read the article

  • correct Installation and configuration of openJDK and R

    - by Marco K
    I am relatively new to Ubuntu, so I won't know a lot of commands that have probably become standard to a lot of you guys. I am trying to set up R and, with it, the necessary Java dependencies to install e.g. JGR, rJava, etc. I read through quite a few instructions to do that, but somehow I must have done something wrong. Here is the state of R and Java:

        $ R --version
        R version 2.14.1 (2011-12-22)
        Copyright (C) 2011 The R Foundation for Statistical Computing
        ISBN 3-900051-07-0
        Platform: x86_64-pc-linux-gnu (64-bit)

        $ java -version
        java version "1.6.0_23"
        OpenJDK Runtime Environment (IcedTea6 1.11pre) (6b23~pre11-0ubuntu1.11.10.1)
        OpenJDK 64-Bit Server VM (build 20.0-b11, mixed mode)

        $ R CMD javareconf
        Java interpreter : /usr/bin/java
        Java version     : 1.6.0_23
        Java home path   : /usr/lib/jvm/java-6-openjdk/jre
        Java compiler    : /usr/bin/javac
        Java headers gen.: /usr/bin/javah
        Java archive tool: /usr/bin/jar
        Java library path: /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/server:/usr/lib/jvm/java-6-openjdk/jre/lib/amd64:/usr/lib/jvm/java-6-openjdk/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib/jni:/lib:/usr/lib
        JNI linker flags : -L/usr/lib/jvm/java-6-openjdk/jre/lib/amd64/server -L/usr/lib/jvm/java-6-openjdk/jre/lib/amd64 -L/usr/lib/jvm/java-6-openjdk/jre/../lib/amd64 -L/usr/java/packages/lib/amd64 -L/usr/lib/jni -L/lib -L/usr/lib -ljvm
        JNI cpp flags    :

    But when I try to install 'JavaGD' in R, which is a dependency for JGR, I get:

        ...
        checking Java support in R... present:
        interpreter : '/usr/bin/java'
        cpp flags   : ''
        java libs   : '-L/usr/lib/jvm/java-6-openjdk/jre/lib/amd64/server -L/usr/lib/jvm/java-6-openjdk/jre/lib/amd64 -L/usr/lib/jvm/java-6-openjdk/jre/../lib/amd64 -L/usr/java/packages/lib/amd64 -L/usr/lib/jni -L/lib -L/usr/lib -ljvm'
        configure: error: One or more Java configuration variables are not set.
        Make sure R is configured with full Java support (including JDK).
        Run R CMD javareconf as root to add Java support to R.
        ...

    Any help would be greatly appreciated. Thanks!
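
    The configure output itself points at the likely fix: the empty 'cpp flags' means R does not know where the JDK headers are. A hedged sequence that usually resolves this on Ubuntu (package names assume the 11.10-era openjdk-6 packages):

        # make sure the full JDK (not just the JRE) and R's build tools are present
        sudo apt-get install openjdk-6-jdk r-base-dev

        # re-detect Java as root so the settings are written system-wide
        sudo R CMD javareconf

        # then, inside R:
        #   install.packages("rJava")
        #   install.packages("JavaGD")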

    Read the article

  • Workaround: build FBX in XNA raise OutOfMemoryException

    - by Vitus
    If you try to add a large FBX 3D model to an XNA project and build it, you can get an OutOfMemoryException build error like the following:

        Error 1 Building content threw OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
            at System.Collections.Generic.List`1.set_Capacity(Int32 value)
            at System.Collections.Generic.List`1.EnsureCapacity(Int32 min)
            at System.Collections.Generic.List`1.InsertRange(Int32 index, IEnumerable`1 collection)
            at Microsoft.Xna.Framework.Content.Pipeline.Graphics.VertexChannel`1.InsertRange(Int32 index, Int32 count)
            at Microsoft.Xna.Framework.Content.Pipeline.Graphics.VertexContent.InsertRange(Int32 index, IEnumerable`1 positionIndexCollection)
            at Microsoft.Xna.Framework.Content.Pipeline.Graphics.MeshBuilder.AddTriangleVertex(Int32 indexIntoVertexCollection)
            at Microsoft.Xna.Framework.Content.Pipeline.MeshConverter.FillNodeWithInfoFromMesh(KFbxNode* fbxNode, String name, KFbxGeometryConverter* geometryConverter)
            at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.ProcessInformationInNode(KFbxNode* fbxNode, String name, Boolean* partOfMainSkeleton, Boolean* warnIfBoneButNotChild)
            at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.ProcessNode(ValueType parentAbsoluteTransform, NodeContent potentialParent, KFbxNode* fbxNode, Boolean partOfMainSkeleton, Boolean warnIfBoneButNotChild)
            at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.ProcessNode(ValueType parentAbsoluteTransform, NodeContent potentialParent, KFbxNode* fbxNode, Boolean partOfMainSkeleton, Boolean warnIfBoneButNotChild)
            at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.Import(String filename, ContentImporterContext context)
            at Microsoft.Xna.Framework.Content.Pipeline.ContentImporter`1.Microsoft.Xna.Framework.Content.Pipeline.IContentImporter.Import(String filename, ContentImporterContext context)
            // additional calls here ...

    My desktop PC has 8GB of RAM, and Visual Studio's process devenv.exe uses under 2GB of it during the build (about 3.5-4GB of RAM is always free). It's obvious that VS can't address more than 2GB of RAM, and when that limit is exceeded, the build fails. My OS is Win x64, so I "charge" devenv.exe by using the editbin.exe utility - in the VS Command Prompt I run the following:

        editbin "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\devenv.exe" /LARGEADDRESSAWARE

    This command edits the image to indicate that the application can handle addresses larger than 2 gigabytes. After that, the FBX file builds successfully! Of course, you must put the proper path to devenv.exe, depending on your installation path. If you are on Win x86, you need to do an additional action - more info here.

    P.S.: although you can now build bigger files than usual, keep in mind that XNA has some restrictions on vertex buffer size etc., depending on your current XNA project profile (Reach or HiDef). If your model's vertex buffer size is more than 64MB (with the Reach profile), the model can't be built and raises an error.

    Read the article

  • CreateRenderTarget returns 0x80070057 in big surface resolution

    - by senggen
    I have created an SLI merged desktop from three 1920x1080 monitors, so the desktop resolution is 5760x1080. There is a 0x80070057 error while calling CreateRenderTarget to create the RT surface:

        IDirect3DSurface9* _render_surface;
        HRESULT hr = _device->CreateRenderTarget(
            _desktop_width * 2,
            _desktop_height + 1,
            D3DFMT_A8R8G8B8,
            D3DMULTISAMPLE_NONE,
            0,
            TRUE,
            &_render_surface,
            NULL);

    It works OK with a desktop resolution of 1024x768, where the total resolution is 3072x768. http://msdn.microsoft.com/en-us/library/windows/desktop/bb174361(v=vs.85).aspx says:

        If the method succeeds, the return value is D3D_OK. If the method fails, the return value can be one of the following: D3DERR_NOTAVAILABLE, D3DERR_INVALIDCALL, D3DERR_OUTOFVIDEOMEMORY, E_OUTOFMEMORY.

    with no description of 0x80070057:

        HRESULT: 0x80070057 (2147942487)
        Name: E_INVALIDARG
        Description: An invalid parameter was passed to the returning function

    Somebody please help me.
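
    One plausible culprit (an inference from the numbers, not something the question confirms): the requested width is _desktop_width * 2 = 11520, and D3D9 devices report hard surface limits in D3DCAPS9.MaxTextureWidth/MaxTextureHeight (8192 on many GPUs of that era); exceeding them can make CreateRenderTarget fail with E_INVALIDARG. A short sketch to check:

        // Query the device limits before creating an oversized render target.
        D3DCAPS9 caps;
        if (SUCCEEDED(_device->GetDeviceCaps(&caps))) {
            UINT w = _desktop_width * 2;      // 11520 for a 5760-wide desktop
            UINT h = _desktop_height + 1;
            if (w > caps.MaxTextureWidth || h > caps.MaxTextureHeight) {
                // the request exceeds what the GPU can allocate as one surface;
                // split the target into tiles no larger than the reported maxima
            }
        }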

    Read the article

  • IIS - HTTP Redirect all requests for one virtual directory to another

    - by nekno
    How do I set up an HTTP Redirect rule to redirect all requests for a virtual directory to another virtual directory, when I don't know the hostname or complete URL, and cannot use the URL Rewrite module? The following redirects should work:

        http://host1/app/oldvdir           -> http://host1/app/newvdir
        http://host1/app/oldvdir/          -> http://host1/app/newvdir/
        http://host1/app/oldvdir/login.aspx -> http://host1/app/newvdir/login.aspx
        http://host2/app/oldvdir/login.aspx -> http://host2/app/newvdir/login.aspx

    I would like to place the redirect rule in the app's root web.config. I have attempted the following rules, but the end result is simply that the redirected vdir gets appended to the end of the original vdir until reaching the max URL length, e.g., http://host/oldvdir/login.aspx -> http://host/oldvdir/newvdir/newvdir/newvdir/...

    Rules in the root web.config (I have also tried all sorts of combinations of settings with and without leading and trailing slashes, etc.):

        <location path="oldvdir">
          <system.webServer>
            <httpRedirect enabled="true" exactDestination="false" httpResponseStatus="Permanent">
              <add wildcard="*/oldvdir/*" destination="/newvdir/"/>
            </httpRedirect>
          </system.webServer>
        </location>
        <location path="oldvdir/">
          <system.webServer>
            <httpRedirect enabled="true" exactDestination="false" destination="/newvdir" httpResponseStatus="Permanent"/>
          </system.webServer>
        </location>

    Read the article

  • Should my dropdown of recently used items show items I no longer have access to

    - by Dan Hibbert
    We are implementing a client for our document management system. Part of this is the check-in screen, where one of the fields a user chooses is the folder the document should be checked into. In our original system, this was represented with a combobox where a user could hand-type a folder path or select a path from a list of the 5 folders they'd recently used for check-ins. It is possible that between the time they used the folder and the time they are doing the new check-in, the user will no longer have access to the folder. At present, we still show the folder as an option and then, if the user chooses that folder, display an error message when the user submits the check-in. We are thinking of removing the recently used folders the user doesn't have access to (we'll make a check when the form is instantiated), because why show an option if we know it will cause a failure (and another dialog message the user has to OK)? However, an opposite opinion is that if we remove those folders, the users will think the system has "forgotten" their recent choices and will lose trust in what they are using. I'd like to get some opinions on the better user experience for this problem.

    Read the article

  • Using OData to get Mix10 files

    - by Jon Dalberg
    There has been a lot of talk around OData lately (go to odata.org for more information) and I wanted to get all the videos from Mix '10: two great tastes that taste great together. Luckily, Mix has exposed the '10 sessions via OData at http://api.visitmix.com/OData.svc - now all I have to do is slap together a bit of code to fetch the videos.

    Step 1 (cut a hole in the box): Create a new console application and add a new service reference.

    Step 2 (put your junk in the box): Write a smidgen of code:

        static void Main(string[] args)
        {
            var mix = new Mix.EventEntities(new Uri("http://api.visitmix.com/OData.svc"));

            var files = from f in mix.Files
                        where f.TypeName == "WMV"
                        select f;

            var web = new WebClient();

            var myVideos = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.MyVideos), "Mix10");

            Directory.CreateDirectory(myVideos);

            files.ToList().ForEach(f => {
                var fileName = new Uri(f.Url).Segments.Last();
                Console.WriteLine(f.Url);
                web.DownloadFile(f.Url, Path.Combine(myVideos, fileName));
            });
        }

    Step 3 (have her open the box): Compile and run.

    As you can see, the client reference created for the OData service handles almost everything for me. Yeah, I know there is some batch file to download the files, but it relies on cUrl being on the machine - and I wanted an excuse to work with an OData service. Enjoy!

    Read the article

  • Design of input files reading when it comes to defaults/transformations

    - by Stefano Borini
    Suppose you have an application that reads an input file, in a language that does not support the concept of None. The input is read, parsed, and the contents are stored in a structure for later use. Now, in general you want to account for transformations of the data from the input, such as adding default values when not specified, or expanding relative paths from the input into full paths. There are two different strategies to achieve this (a short sketch contrasting them follows below). The first strategy is to perform these transformations at input file reading time. In practice, you put all the intelligence into the input parser, and your application has no logic to deal with unexpected circumstances, such as an unspecified value. You lose the information of what was specified and what wasn't, but you gain in black-boxing the details. Your "running code" needs that information in any case and in a proper form, and is not concerned whether it's the default or user-specified information. The second strategy is to make the file reader a literal one-to-one mapper from the file to a memory-stored object, with no intelligent behavior. Unspecified values are not filled (which may however be a problem in languages not supporting None) and data is stored verbatim from the file. The intelligence for recovery must now go into the "running code", which must check what was specified in the file, eventually fall back to a default, or modify the input properly before using it. I would like to know your opinion on these two approaches, and in particular which one you have found most frequently implemented.
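
    A minimal sketch of the contrast, purely illustrative (C++ here, where "no None" means tracking presence with an explicit flag; the keys, defaults, and helper are hypothetical):

        #include <map>
        #include <string>

        using RawFile = std::map<std::string, std::string>;  // key/value pairs as read

        // hypothetical helper: expand a relative path (stub for illustration)
        static std::string makeAbsolute(const std::string& p) {
            return (p.empty() || p[0] == '/') ? p : "/base/" + p;
        }

        // Strategy 1: normalize at read time; consumers never see "unspecified".
        struct Config {
            std::string output_path;  // always absolute, defaulted if absent
            int retries;              // always valid, defaulted if absent
        };

        Config parseNormalizing(const RawFile& f) {
            Config c;
            auto out = f.find("output");
            c.output_path = (out != f.end()) ? makeAbsolute(out->second) : "/var/tmp/out";
            auto rt = f.find("retries");
            c.retries = (rt != f.end()) ? std::stoi(rt->second) : 3;
            return c;
        }

        // Strategy 2: map verbatim and record presence; every consumer must
        // check the *_set flags and apply its own defaults.
        struct RawConfig {
            std::string output_path; bool output_path_set = false;
            int retries = 0;         bool retries_set = false;
        };

        RawConfig parseVerbatim(const RawFile& f) {
            RawConfig r;
            if (auto it = f.find("output"); it != f.end()) {
                r.output_path = it->second; r.output_path_set = true;
            }
            if (auto it = f.find("retries"); it != f.end()) {
                r.retries = std::stoi(it->second); r.retries_set = true;
            }
            return r;
        }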

    Read the article

  • When browsing a specific network share remotely, Windows Explorer continuously jumps back to the parent folder

    - by Evil Pigeon
    I am accessing a specific network path on another domain. It looks something like this:

        \\CoputerName.OtherDomain.in\c$\Inetpub\Testing\Website\

    In under 30 seconds, I am automatically jumped back to

        \\CoputerName.OtherDomain.in\c$\Inetpub\Testing\

    In less time it then jumps back to

        \\CoputerName.OtherDomain.in\c$\Inetpub\

    Then it jumps back to c$ for its final resting place:

        \\CoputerName.OtherDomain.in\c$\

    At first I thought this had to do with a faulty keyboard, but this behaviour also occurs when the window does not have focus. It's as if Windows thinks that the folder no longer exists (as in, someone else has deleted or moved it). This behaviour is not specific to my PC either; it occurs from other machines in the office. Edit: It looks like this issue only occurs from other Windows 7 machines. There are no issues accessing the path from XP.

    Read the article

  • Code review recommendations and Code Smells

    - by Michael Freidgeim
    Some time ago Twitter told me that I am similar to Boris Lipschitz. Indeed, he is also a .NET programmer from Russia living in Australia. I've read his list of Code Review points and found them quite comprehensive. A few points were not clear to me, and that forced me into further reading. In particular, the statement "Exceptions should not be used to return a status or an error code" wasn't fully clear to me, because sometimes we store an exception as an object with all the error details, and I believe that's a valid approach. However, I agree that throwing exceptions should be avoided if you expect to return an error as part of a normal flow. Related link: http://codeutopia.net/blog/2010/03/11/should-a-failed-function-return-a-value-or-throw-an-exception/

    Another point slightly puzzled me: "If Thread.Sleep() is used, can it be replaced with something else, e.g. Timer, AutoResetEvent, etc." I believe there are very rare cases of anyone using Thread.Sleep in production code; usually it is used in mocks and prototypes.

    I had to look further to clarify "Dependency injection is used instead of Service Location pattern". Even though most articles show a preference for dependency injection, there are also advantages to using Service Location. E.g. see http://geekswithblogs.net/KyleBurns/archive/2012/04/27/dependency-injection-vs.-service-locator.aspx. http://www.cookcomputing.com/blog/archives/000587.html refers to the Concluding Thoughts of Martin Fowler: "The choice between Service Locator and Dependency Injection is less important than the principle of separating service configuration from the use of services within an application."

    The post had a link to the excellent Code Smells article by Jeff Atwood, but the statement that "code should not pass a review if it violates any of the code smells" sounds too strict for my environment. In particular, I disagree with the "Dead Code" recommendation "Ruthlessly delete code that isn't being used. That's why we have source control systems!". If there is a chance that unused code will be required in the future, it is convenient to keep it as commented-out or #if/#endif blocks with an appropriate explanation of why it could be required later. TFS is a good source control system, but a context search in the source code of the current solution is much easier than finding something in previous versions of the code.

    I also found a link to a good book, "Clean Code: A Handbook of Agile Software Craftsmanship".

    Read the article

  • updating drive mapping GPO programmatically using powershell

    - by Kristoffer
    I have a Group Policy in a domain that has lots of drive mapping settings. I would like to change the path for a lot of the servers in this GPO with PowerShell, if possible. I know I could do this via the GPMC, but I would prefer to do it programmatically. I have looked at the GroupPolicy PowerShell module from Microsoft (Get-GPO and friends), but I only seem to be able to change registry entries and permissions on the policies, not the actual path for the drive mappings. Any ideas? Thanks!
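
    One hedged approach: Group Policy Preferences drive maps are stored as a Drives.xml file under the policy's folder in SYSVOL, which can be edited as plain XML. The domain name, GUID, and server names below are placeholders, and note the caveat that editing the file directly does not bump the GPO's version number the way GPMC does, so the version may need to be incremented before clients re-apply the changes:

        # Path to the drive-maps XML of one GPO (placeholders: domain and GUID)
        $drivesXml = '\\contoso.com\SYSVOL\contoso.com\Policies\{GUID}\User\Preferences\Drives\Drives.xml'

        # Load the file, rewrite every \\oldserver\ prefix, and save it back
        [xml]$xml = Get-Content $drivesXml
        foreach ($drive in $xml.Drives.Drive) {
            $drive.Properties.path = $drive.Properties.path -replace '^\\\\oldserver\\', '\\newserver\'
        }
        $xml.Save($drivesXml)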

    Read the article

  • bash script - spawn, send, interact - commands not found error

    - by Sandeepan Nath
    In my shell script, I am trying to remove the password prompt for the scp command (as given in http://stackoverflow.com/questions/459182/using-expect-to-pass-a-password-to-ssh/459225#459225) and this is what I have so far:

        #!/usr/bin/expect
        spawn scp $DESTINATION_PATH/exam.tar $SSH_CREDENTIALS':/'$PROJECT_INSTALLATION_PATH
        expect "password:"
        send $sshPassword"\n";
        interact

    On running the script, I am getting the errors:

        spawn: command not found
        send: command not found
        interact: command not found

    I was also getting the error expect: command not found, and then I realised the path to expect was not correct - expect was not installed at all. So I did yum install expect and corrected the path, and that error was gone. But I am still not able to remove the other 3 errors.
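
    "spawn: command not found" is what a POSIX shell prints when it is handed expect commands, so these errors suggest the script is still being run by a shell (e.g. via sh script.sh, which ignores the shebang) rather than by expect. A hedged rewrite in actual Tcl/expect syntax (the bash-style $VAR'...'$VAR concatenation in the question is not valid Tcl, so the values are pulled from environment variables the calling shell is assumed to export):

        #!/usr/bin/expect -f
        # read the values exported by the calling shell (names are assumptions)
        set dest     $env(DESTINATION_PATH)
        set creds    $env(SSH_CREDENTIALS)
        set instpath $env(PROJECT_INSTALLATION_PATH)
        set pass     $env(SSH_PASSWORD)

        spawn scp $dest/exam.tar $creds:/$instpath
        expect "password:"
        send "$pass\r"
        # scp needs no further input, so wait for it to finish
        # (instead of handing control back with interact)
        expect eof

    Then make it executable (chmod +x) and run it as ./script.exp, not as sh script.exp.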

    Read the article

  • Sharing an external hard drive in Ubuntu using Samba

    - by cambraca
    /media/MYDISK is where my hard drive is mounted automatically. I created a hardlink of it as follows:

        ln -s /media/MYDISK /home/camilo/MYDISK
        chmod 777 /home/camilo/MYDISK

    I'm setting up smb.conf like this:

        [myshare1]
        comment = external disk
        browsable = yes
        path = /home/camilo/MYDISK
        guest ok = yes
        read only = no
        create mask = 0775

    Also, in the [global] section I tried adding the following lines:

        follow symlinks = yes
        wide links = yes
        unix extensions = no

    The problem is that when browsing the shared folder in Windows 7, I get a "\\etc\myshare1 is not accessible" error. When pointing the path to a regular folder, it works fine. Also, when I point it directly to /media/MYDISK, it shows the same error.

    EDIT: to make it more interesting, I have no graphical interface, so I need to touch the config files directly.

    Read the article
