Search Results

Search found 79 results on 4 pages for 'hao wooi lim'.

Page 1/4 | 1 2 3 4  | Next Page >

  • C#: Why can't I find the Sum() of this HashSet? It says "Arithmetic operation resulted in an overflow."

    - by user2332665
    I was trying to solve Project Euler problem 125. This is my solution in Python (just to show the logic):

        import math

        lim = 10**8
        found = set()
        for start in xrange(1, int(math.sqrt(lim))):
            sos = start*start
            for i in xrange(start+1, int(math.sqrt(lim))):
                sos += (i*i)
                if sos >= lim:
                    break
                s = str(int(sos))
                if s == s[::-1]:
                    found.add(sos)
        print sum(found)

    The same code I wrote in C# is as follows:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace ConsoleApplication1
        {
            class Program
            {
                public static bool isPalindrome(string s)
                {
                    string temp = "";
                    for (int i = s.Length - 1; i >= 0; i -= 1) { temp += s[i]; }
                    return (temp == s);
                }

                static void Main(string[] args)
                {
                    int lim = Convert.ToInt32(Math.Pow(10, 8));
                    var found = new HashSet<int>();
                    for (int start = 1; start < Math.Sqrt(lim); start += 1)
                    {
                        int s = start * start;
                        for (int i = start + 1; start < Math.Sqrt(lim); i += 1)
                        {
                            s += i * i;
                            if (s > lim) { break; }
                            if (isPalindrome(s.ToString())) { found.Add(s); }
                        }
                    }
                    Console.WriteLine(found.Sum());
                }
            }
        }

    The code debugs fine until it throws an exception at Console.WriteLine(found.Sum()); (line 31). Why can't I find the Sum() of the set found?
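
    A hedged note (mine, not part of the question): two separate things can blow up here. The inner loop's condition tests start instead of i, so i keeps growing and s += i * i eventually wraps; and even with the loop fixed, the answer to this problem is larger than Int32.MaxValue, and LINQ's Sum() over int uses checked arithmetic, so it throws OverflowException at exactly that line. A minimal sketch of the fix:

        // Sketch only: fix the loop condition and sum into a long, since
        // Enumerable.Sum over int throws OverflowException once the running
        // total passes Int32.MaxValue.
        for (int i = start + 1; i < Math.Sqrt(lim); i += 1)  // was: start < Math.Sqrt(lim)
        {
            // ... unchanged body ...
        }
        long total = found.Sum(x => (long)x);
        Console.WriteLine(total);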

    Read the article

  • How to draw a line inside a scatter plot

    - by ruffy
    I can't believe that this is so complicated, but I have tried and googled for a while now. I just want to analyse my scatter plot with a few graphical features. For starters, I want to simply add a line. So, I have a few (4) points, and I want to add a line to the plot like in this image [1]. Now, this won't work. And frankly, the documentation-examples-gallery combo and content of matplotlib is a bad source of information. My code is based upon a simple scatter plot from the gallery:

        # definitions for the axes
        left, width = 0.1, 0.85    # 0.65
        bottom, height = 0.1, 0.85 # 0.65
        bottom_h = left_h = left + width + 0.02
        rect_scatter = [left, bottom, width, height]

        # start with a rectangular Figure
        fig = plt.figure(1, figsize=(8,8))
        axScatter = plt.axes(rect_scatter)

        # the scatter plot:
        p1 = axScatter.scatter(x[0], y[0], c='blue', s=70)
        p2 = axScatter.scatter(x[1], y[1], c='green', s=70)
        p3 = axScatter.scatter(x[2], y[2], c='red', s=70)
        p4 = axScatter.scatter(x[3], y[3], c='yellow', s=70)
        p5 = axScatter.plot([1,2,3], "r--")
        plt.legend([p1, p2, p3, p4, p5],
                   [names[0], names[1], names[2], names[3], "Random guess"],
                   loc=2)

        # now determine nice limits by hand:
        binwidth = 0.25
        xymax = np.max([np.max(np.fabs(x)), np.max(np.fabs(y))])
        lim = (int(xymax/binwidth) + 1) * binwidth
        axScatter.set_xlim((-lim, lim))
        axScatter.set_ylim((-lim, lim))
        xText = axScatter.set_xlabel('FPR / Specificity')
        yText = axScatter.set_ylabel('TPR / Sensitivity')
        bins = np.arange(-lim, lim + binwidth, binwidth)

        plt.show()

    [1] http://en.wikipedia.org/wiki/File:ROC_space-2.png

    Everything works, except p5, which is a line. Now how is this supposed to work? What's good practice here?
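
    A hedged sketch of one common fix (mine, not from the thread): plot draws y against x, and when given a single list it uses the indices 0, 1, 2 as x, which is why p5 lands outside the unit square; also, plot returns a list of Line2D objects, so the handle needs unpacking before it goes into the legend:

        # Sketch only: draw the chance diagonal with explicit x and y values,
        # and unpack the single Line2D handle that plot() returns.
        p5, = axScatter.plot([0, 1], [0, 1], "r--")
        plt.legend([p1, p2, p3, p4, p5],
                   [names[0], names[1], names[2], names[3], "Random guess"],
                   loc=2)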

    Read the article

  • Why is my implementation of the Sieve of Atkin overlooking numbers close to the specified limit?

    - by Ross G
    My implementation either overlooks primes near the limit or includes composites near the limit, while some limits work and others don't. I am completely confused as to what is wrong.

        def AtkinSieve(limit):
            results = [2, 3, 5]
            sieve = [False]*limit
            factor = int(math.sqrt(lim))
            for i in range(1, factor):
                for j in range(1, factor):
                    n = 4*i**2 + j**2
                    if (n <= lim) and (n % 12 == 1 or n % 12 == 5):
                        sieve[n] = not sieve[n]
                    n = 3*i**2 + j**2
                    if (n <= lim) and (n % 12 == 7):
                        sieve[n] = not sieve[n]
                    if i > j:
                        n = 3*i**2 - j**2
                        if (n <= lim) and (n % 12 == 11):
                            sieve[n] = not sieve[n]
            for index in range(5, factor):
                if sieve[index]:
                    for jndex in range(index**2, limit, index**2):
                        sieve[jndex] = False
            for index in range(7, limit):
                if sieve[index]:
                    results.append(index)
            return results

    For example, when I generate primes up to the limit of 1000, the Atkin sieve misses the prime 997 but includes the composite 965. But if I generate up to the limit of 5000, the list it returns is completely correct.
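
    A hedged reading of the bug (mine, not from the thread): range(1, factor) stops at factor - 1, so representations that need i or j equal to int(math.sqrt(limit)) are never counted. For limit 1000 that skips 4*3**2 + 31**2 = 997 entirely and flips the solution-count parity of 965; for other limits the boundary value may not matter, which would explain why some limits appear to work. The function also reads a global lim where it means its parameter limit. A minimal sketch of the corrected bounds:

        # Sketch only: include the square root itself in the candidate loops,
        # use the parameter rather than a global, and keep n strictly below
        # limit so sieve[n] stays in range.
        factor = int(math.sqrt(limit))
        for i in range(1, factor + 1):
            for j in range(1, factor + 1):
                n = 4*i**2 + j**2
                if n < limit and n % 12 in (1, 5):
                    sieve[n] = not sieve[n]
                # ... the 3*i**2 + j**2 and 3*i**2 - j**2 cases need the
                # same treatment ...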

    Read the article

  • catching a deadlock in a simple odd-even sending

    - by user562264
    I'm trying to solve a simple problem with MPI; my implementation is MPICH2 and my code is in Fortran. I have used blocking send and receive. The idea is simple, but when I run it, it crashes! I have absolutely no idea what is wrong. Can anyone comment on this issue, please? Here is a piece of the code:

        integer,parameter::IM=100,JM=100
        REAL,ALLOCATABLE ::T(:,:),TF(:,:)
        CALL MPI_COMM_RANK(MPI_COMM_WORLD,RNK,IERR)
        CALL MPI_COMM_SIZE(MPI_COMM_WORLD,SIZ,IERR)
        prv = rnk-1
        nxt = rnk+1
        LIM = INT(IM/SIZ)
        IF (rnk==0) THEN
            ALLOCATE(TF(IM,JM))
            prv = MPI_PROC_NULL
        ELSEIF(rnk==siz-1) THEN
            NXT = MPI_PROC_NULL
            LIM = LIM+MOD(IM,SIZ)
        END IF
        IF (MOD(RNK,2)==0) THEN
            CALL MPI_SEND(T(2,:),JM+2,MPI_REAL,PRV,10,MPI_COMM_WORLD,IERR)
            CALL MPI_RECV(T(1,:),JM+2,MPI_REAL,PRV,20,MPI_COMM_WORLD,STAT,IERR)
        ELSE
            CALL MPI_RECV(T(LIM+2,:),JM+2,MPI_REAL,NXT,10,MPI_COMM_WORLD,STAT,IERR)
            CALL MPI_SEND(T(LIM+1,:),JM+2,MPI_REAL,NXT,20,MPI_COMM_WORLD,IERR)
        END IF

    As I understand it, the even processes are not receiving anything, while the odd ones finish sending successfully. In some cases, when I added some prints to observe what is going on, I saw that the variable NXT was changing during the sending procedure! For example, all the odd processes were sending messages to process 0, not to their next one!
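
    A hedged guess at the crash (mine, not from the thread): without IMPLICIT NONE, STAT defaults to a real scalar, but MPI_RECV needs an integer array of MPI_STATUS_SIZE elements there, so each receive writes past STAT and corrupts neighbouring variables, which is exactly the kind of thing that would make NXT change mid-run. PRV, RNK and SIZ likewise default to REAL, while MPI rank arguments must be INTEGER (and T is never allocated in the snippet shown). A minimal sketch of the declarations:

        ! Sketch only: declare everything explicitly so no MPI argument is
        ! implicitly typed REAL, and give the status its required size.
        IMPLICIT NONE
        INCLUDE 'mpif.h'
        INTEGER :: RNK, SIZ, PRV, NXT, LIM, IERR
        INTEGER :: STAT(MPI_STATUS_SIZE)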

    Read the article

  • Help with dynamic range compression function (audio)

    - by MusiGenesis
    I am writing a C# function for doing dynamic range compression (an audio effect that basically squashes transient peaks and amplifies everything else to produce an overall louder sound). I have written a function that does this (I think):

        public static void Compress(ref short[] input, double thresholdDb, double ratio)
        {
            double maxDb = thresholdDb - (thresholdDb / ratio);
            double maxGain = Math.Pow(10, -maxDb / 20.0);
            for (int i = 0; i < input.Length; i += 2)
            {
                // convert sample values to ABS gain and store original signs
                int signL = input[i] < 0 ? -1 : 1;
                double valL = (double)input[i] / 32768.0;
                if (valL < 0.0) { valL = -valL; }
                int signR = input[i + 1] < 0 ? -1 : 1;
                double valR = (double)input[i + 1] / 32768.0;
                if (valR < 0.0) { valR = -valR; }

                // calculate mono value and compress
                double val = (valL + valR) * 0.5;
                double posDb = -Math.Log10(val) * 20.0;
                if (posDb < thresholdDb)
                {
                    posDb = thresholdDb - ((thresholdDb - posDb) / ratio);
                }

                // measure L and R sample values relative to mono value
                double multL = valL / val;
                double multR = valR / val;

                // convert compressed db value to gain and amplify
                val = Math.Pow(10, -posDb / 20.0);
                val = val / maxGain;

                // re-calculate L and R gain values relative to
                // compressed/amplified mono value
                valL = val * multL;
                valR = val * multR;

                double lim = 1.5; // determined by experimentation, with the goal
                                  // being that the lines below should never
                                  // (or rarely) be hit
                if (valL > lim) { valL = lim; }
                if (valR > lim) { valR = lim; }

                double maxval = 32000.0 / lim;

                // convert gain values back to sample values
                input[i] = (short)(valL * maxval);
                input[i] *= (short)signL;
                input[i + 1] = (short)(valR * maxval);
                input[i + 1] *= (short)signR;
            }
        }

    I am calling it with threshold values between 10.0 dB and 30.0 dB and ratios between 1.5 and 4.0. This function definitely produces a louder overall sound, but with an unacceptable level of distortion, even at low threshold values and low ratios. Can anybody see anything wrong with this function? Am I handling the stereo aspect correctly (the function assumes stereo input)? As I (dimly) understand things, I don't want to compress the two channels separately, so my code attempts to compress a "virtual" mono sample value and then apply the same degree of compression to the L and R sample values separately. I'm not sure I'm doing it right, however. I think part of the problem may be the "hard knee" of my function, which kicks in the compression abruptly when the threshold is crossed; I may need to use a "soft knee" instead. Can anybody suggest a modification to my function to produce a soft-knee curve?
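
    One hedged observation (mine, not from the thread): the gain here is recomputed from each sample pair in isolation, and a compressor whose gain jumps from sample to sample distorts regardless of the knee shape; real compressors smooth the gain with an envelope follower. A minimal sketch of that idea, with attackCoeff and releaseCoeff as hypothetical tuning constants:

        // Sketch only: smooth the per-sample target gain with a one-pole
        // attack/release filter so the applied gain cannot change abruptly.
        double envelope = 1.0;          // persists across samples
        double attackCoeff = 0.01;      // hypothetical, tune by ear
        double releaseCoeff = 0.001;    // hypothetical, tune by ear
        // inside the sample loop, after computing targetGain:
        double coeff = targetGain < envelope ? attackCoeff : releaseCoeff;
        envelope += coeff * (targetGain - envelope);
        // then scale the L and R samples by envelope instead of targetGain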

    Read the article

  • Server 2008 R2: .NET 4 programs run from a network share won't work

    - by Lim
    We have several .NET apps stored on a network share on our old server (Windows Server 2003). These apps get called by Windows 7 32-bit clients and work perfectly fine. We recently wanted to upgrade our server to Windows Server 2008 R2 and moved the apps from the old share to our new share on the new server. The problem started right afterwards, since none of our .NET programs will start anymore from the new share (2008 R2). The error that comes up is "This application could not be started.", with a link to Microsoft: http://support.microsoft.com/kb/2715633 (This error also shows up when I start a simple console HelloWorld, built in VS2012, from the server share.) Does anyone have any idea why this is no longer working on the new server share? We've been searching for two days now with no result at all. Any tips / help / questions appreciated :) Greetings, Lim
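
    A hedged aside, not from the thread: since the same binaries run fine from the old share, the difference is likely in how the clients trust the new server rather than in the apps themselves; the dialog linking to kb2715633 is .NET's generic initialization error. Two things commonly checked in this situation are whether the new server's name falls into a different Internet Explorer security zone than the old one, and, for .NET 4 apps that load further assemblies off the share, the loadFromRemoteSources switch:

        <!-- Sketch only: .NET 4 app.config switch that grants assemblies
             loaded from remote locations the grant set of their zone. -->
        <configuration>
          <runtime>
            <loadFromRemoteSources enabled="true" />
          </runtime>
        </configuration>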

    Read the article

  • Printing Arrays from Structs

    - by Carlll
    I've been stumped for a few hours on an exercise where I must use functions to build up an array inside a struct and print it. My current program compiles but crashes upon running.

        #define LIM 10

        typedef char letters[LIM];

        typedef struct {
            int counter;
            letters words[LIM];
        } foo;

        int main(int argc, char **argv) {
            foo apara;
            structtest(apara, LIM);
            print_struct(apara);
        }

        int structtest(foo *p, int limit) {
            p->counter = 0;
            int i = 0;
            for (i; i < limit; i++) {
                strcpy(p->words[p->counter], "x"); // only filling arrays with 'x' as an example
                p->counter++;
            }
            return;
        }

    I do believe it's due to my incorrect usage/combination of pointers. I've tried adjusting them, but either an 'incompatible types' error is produced, or the array is seemingly blank.

        void print_struct(foo p) {
            printf(p.words);
        }

    I haven't made it successfully up to the print_struct stage, but I'm unsure whether p.words is the correct item to be calling. In the output, I would expect the function to print an array of x's. I apologize in advance if I've made some sort of grievous "I should already know this" C mistake. Thanks for your help.
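
    A hedged sketch of one way to line the pieces up (mine, not from the thread): structtest expects a foo* but receives the struct by value, so it writes through a junk pointer; passing &apara, declaring the functions before main, and printing each string by index avoids both the crash and the type errors. Assuming the same struct definition:

        #include <stdio.h>
        #include <string.h>

        void structtest(foo *p, int limit) {
            p->counter = 0;
            for (int i = 0; i < limit; i++) {
                strcpy(p->words[p->counter], "x");  /* fill each slot with "x" */
                p->counter++;
            }
        }

        void print_struct(const foo *p) {
            for (int i = 0; i < p->counter; i++) {
                printf("%s\n", p->words[i]);        /* one word per line */
            }
        }

        /* called as: structtest(&apara, LIM); print_struct(&apara); */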

    Read the article

  • pthread_setaffinity_np: Invalid argument?

    - by hahuang65
    I've managed to get my pthreads program sort of working. Basically, I am trying to manually set the affinity of four threads such that thread 1 runs on CPU 1, thread 2 runs on CPU 2, thread 3 runs on CPU 3, and thread 4 runs on CPU 4. After compiling, my code works for a few threads but not others (it seems like thread 1 never works), and running the same compiled program a couple of different times gives me different results. For example:

        hao@Gorax:~/Desktop$ ./a.out
        Thread 3 is running on CPU 3
        pthread_setaffinity_np: Invalid argument
        Thread Thread 2 is running on CPU 2
        hao@Gorax:~/Desktop$ ./a.out
        Thread 2 is running on CPU 2
        pthread_setaffinity_np: Invalid argument
        pthread_setaffinity_np: Invalid argument
        Thread 3 is running on CPU 3
        Thread 3 is running on CPU 3
        hao@Gorax:~/Desktop$ ./a.out
        Thread 2 is running on CPU 2
        pthread_setaffinity_np: Invalid argument
        Thread 4 is running on CPU 4
        Thread 4 is running on CPU 4
        hao@Gorax:~/Desktop$ ./a.out
        pthread_setaffinity_np: Invalid argument

    My question is "Why does this happen? Also, why does the message sometimes print twice?" Here is the code:

        #define _GNU_SOURCE
        #include <stdio.h>
        #include <pthread.h>
        #include <stdlib.h>
        #include <sched.h>
        #include <errno.h>

        #define handle_error_en(en, msg) \
                do { errno = en; perror(msg); exit(EXIT_FAILURE); } while (0)

        void *thread_function(char *message)
        {
            int s, j, number;
            pthread_t thread;
            cpu_set_t cpuset;

            number = (int)message;
            thread = pthread_self();
            CPU_SET(number, &cpuset);
            s = pthread_setaffinity_np(thread, sizeof(cpu_set_t), &cpuset);
            if (s != 0)
            {
                handle_error_en(s, "pthread_setaffinity_np");
            }
            printf("Thread %d is running on CPU %d\n", number, sched_getcpu());
            exit(EXIT_SUCCESS);
        }

        int main()
        {
            pthread_t thread1, thread2, thread3, thread4;
            int thread1Num = 1;
            int thread2Num = 2;
            int thread3Num = 3;
            int thread4Num = 4;
            int thread1Create, thread2Create, thread3Create, thread4Create, i, temp;

            thread1Create = pthread_create(&thread1, NULL, (void *)thread_function, (char *)thread1Num);
            thread2Create = pthread_create(&thread2, NULL, (void *)thread_function, (char *)thread2Num);
            thread3Create = pthread_create(&thread3, NULL, (void *)thread_function, (char *)thread3Num);
            thread4Create = pthread_create(&thread4, NULL, (void *)thread_function, (char *)thread4Num);

            pthread_join(thread1, NULL);
            pthread_join(thread2, NULL);
            pthread_join(thread3, NULL);
            pthread_join(thread4, NULL);

            return 0;
        }
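
    A hedged diagnosis (mine, not from the thread): cpuset is never cleared, so CPU_SET flips one bit in whatever stack garbage is already there, and a set that names nonexistent CPUs makes pthread_setaffinity_np fail with EINVAL. On top of that, Linux numbers CPUs from 0, so "CPU 4" does not exist on a four-core machine, and calling exit() inside thread_function tears down the whole process while other threads hold half-flushed stdio buffers, which fits the missing and doubled lines. A minimal sketch:

        // Sketch only: start from an empty set, and end only this thread.
        CPU_ZERO(&cpuset);            // clear stack garbage first
        CPU_SET(number, &cpuset);     // number should range over 0..3
        s = pthread_setaffinity_np(thread, sizeof(cpu_set_t), &cpuset);
        /* ... */
        pthread_exit(NULL);           // instead of exit(EXIT_SUCCESS)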

    Read the article

  • Saving game data to server [on hold]

    - by Eugene Lim
    What's the best method to save the player's data to the server?

    Method to store the game saves: which one of the following methods should I use?
      • Using a database structure (e.g. MySQL) to store the game data as blobs?
      • Using the server hard disk to store the saved game data as binary data files?

    Method to send saved game data to the server: what method should I use?
      • a socket.IO WebSocket
      • a web-based scripting language that receives the game data as binary, for example a PHP script that handles the binary data and saves it to a file (see the sketch below)

    Meta-data: I read that some games store saved-game meta-data in database structures. What kind of meta-data is useful to store?
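
    As a hedged illustration of the PHP option (the names and paths are mine, not from the post), a minimal receiver could look like:

        <?php
        // Sketch only: read the raw request body and write it to a per-player
        // save file; a real version would authenticate and validate first.
        $playerId = basename($_GET['player'] ?? 'anonymous');  // hypothetical parameter
        $data = file_get_contents('php://input');              // raw binary body
        file_put_contents('/var/game/saves/' . $playerId . '.sav', $data);
        ?>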

    Read the article

  • Ubuntu 12.10 Wine Crash Error Help

    - by Hao
    I'm trying to get Wine to work for Hawken. I used winetricks and installed all the necessary components. Wine works fine with other games I have installed, but when I try to run Hawken, it crashes while loading with the following error:

        fixme:iphlpapi:CancelIPChangeNotify (overlapped 0xf2afb8): stub

    I'm new to Ubuntu, so I have no idea how to fix this. Any help would be much appreciated. Update: I've done some backtracking, and it seems that this error is caused by the launcher, not by Wine itself.

    Read the article

  • How to expand URLs in C#?

    - by Hao Wooi Lim
    If I have a URL like http://popurls.com/go/msn.com/l4eba1e6a0ffbd9fc915948434423a7d5, how do I expand it back to the original URL programmatically? Obviously I could use an API like expandurl.com, but that limits me to 100 requests per hour.
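
    A hedged sketch of doing this without a third-party API (mine, not from the question): send the request yourself with automatic redirects disabled and read where the Location header points:

        // Sketch only: shortUrl is a placeholder for the shortened address.
        var request = (HttpWebRequest)WebRequest.Create(shortUrl);
        request.AllowAutoRedirect = false;   // keep the 3xx instead of following it
        request.Method = "HEAD";             // no response body needed
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            string expanded = response.Headers["Location"]; // null if no redirect
            Console.WriteLine(expanded ?? shortUrl);
        }

    Some shorteners chain several redirects, so looping until the status code is no longer a 3xx may be necessary.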

    Read the article

  • Recasting and Drawing in SDL

    - by user1078123
    I have some code that essentially draws a column on the screen of a wall in a raycasting-type 3D engine. I am trying to optimize it, as it takes about 10 milliseconds to draw a million pixels this way, and the vast majority of game time is spent in this loop. However, I don't quite understand what's occurring, particularly the recasting (I modified the "pixel manipulation" sample code from the SDL documentation). "canvas" is the surface I am drawing to, and "hello" is the surface containing the texture for the column.

        int c = (curcol) * canvas->format->BytesPerPixel;
        void *canvaspixels = canvas->pixels;
        Uint16 texpitch = hello->pitch;
        int lim = (drawheight + startdraw) * canvpitch + c + (int) canvaspixels;
        Uint8 *k = (Uint8 *)hello->pixels + (hit) * hello->format->BytesPerPixel;
        for (int j = (startdraw)*(canvpitch) + c + (int) canvaspixels; (j < lim); j += canvpitch)
        {
            Uint8 *q = (Uint8 *) ((int(h))*(texpitch) + k);
            *(Uint32 *)j = *(Uint32 *)q;
            h += s;
        }

    We have void pointers (I'm not sure how those are even represented), 8-, 16-, and 32-bit ints (h and s are floats), all being intermingled, and while it works, it is quite confusing.
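
    A hedged restatement of what the loop does (mine, not from the thread, using the question's own variables): it walks down one screen column one row (canvpitch bytes) at a time and copies a 32-bit texel whose texture row is chosen by the fixed-step accumulator h. Keeping the addresses as byte pointers rather than ints says the same thing without the casts:

        // Sketch only: the same copy loop with pointers kept as pointers.
        Uint8 *dst = (Uint8 *)canvas->pixels + startdraw * canvpitch + c;
        Uint8 *end = (Uint8 *)canvas->pixels + (drawheight + startdraw) * canvpitch + c;
        for (; dst < end; dst += canvpitch)      // next screen row, same column
        {
            Uint8 *src = k + (int)h * texpitch;  // texture row selected by h
            *(Uint32 *)dst = *(Uint32 *)src;     // copy one 32-bit pixel
            h += s;                              // advance texture coordinate
        }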

    Read the article

  • Apache MINA NIO connector help

    - by satya
    I'm new to using MINA. I have a program which uses the MINA NIO connector to connect to a host. I'm able to send data and also receive. This is clear from the log4j log, which I'm attaching below.

        E:\>java TC4HostClient
        [12:21:46] NioProcessor-1 INFO [] [] [org.apache.mina.filter.logging.LoggingFilter] - CREATED
        [12:21:46] NioProcessor-1 INFO [] [] [org.apache.mina.filter.logging.LoggingFilter] - OPENED
        Opened
        CGS Sign On
        [12:21:46] NioProcessor-1 INFO [] [] [org.apache.mina.filter.logging.LoggingFilter] - SENT: HeapBuffer[pos=0 lim=370 cap=512: 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20...]
        [12:21:46] NioProcessor-1 INFO [] [] [org.apache.mina.filter.logging.LoggingFilter] - SENT: HeapBuffer[pos=0 lim=0 cap=0: empty]
        Message Sent 00000333CST 1001010 00000308000003080010000 000009600000000FTS O00000146TC4DS 001WSJTC41 ---001NTMU9001-I --- -----000 0030000000012400000096500007013082015SATYA 500000 010165070000002200011 01800000000022000001241 172.16.25.122 02
        [12:21:46] NioProcessor-1 INFO [] [] [org.apache.mina.filter.logging.LoggingFilter] - RECEIVED: HeapBuffer[pos=0 lim=36 cap=2048: 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20...]
        [12:21:46] NioProcessor-1 INFO [] [] [org.apache.mina.filter.logging.LoggingFilter] - RECEIVED: HeapBuffer[pos=0 lim=505 cap=2048: 31 20 20 20 20 20 20 20 20 30 30 30 30 30 34 38...]
        After Writing
        [12:21:52] NioProcessor-1 INFO [] [] [org.apache.mina.filter.logging.LoggingFilter] - CLOSED

    Though I see "RECEIVED" in the log, my handler's messageReceived method is not being called. Can anyone please help me in this regard and tell me what I'm doing wrong?

        import java.io.IOException;
        import java.net.InetSocketAddress;
        import java.nio.charset.Charset;
        import java.net.SocketAddress;
        import org.apache.mina.core.service.IoAcceptor;
        import org.apache.mina.core.session.IdleStatus;
        import org.apache.mina.filter.codec.ProtocolCodecFilter;
        import org.apache.mina.filter.codec.textline.TextLineCodecFactory;
        import org.apache.mina.filter.logging.LoggingFilter;
        import org.apache.mina.transport.socket.nio.NioSocketConnector;
        import org.apache.mina.core.session.IoSession;
        import org.apache.mina.core.future.*;

        public class TC4HostClient {
            private static final int PORT = 9123;

            public static void main(String[] args) throws IOException, Exception {
                NioSocketConnector connector = new NioSocketConnector();
                SocketAddress address = new InetSocketAddress("172.16.25.3", 8004);
                connector.getSessionConfig().setReadBufferSize(2048);
                connector.getFilterChain().addLast("logger", new LoggingFilter());
                connector.getFilterChain().addLast("codec", new ProtocolCodecFilter(
                        new TextLineCodecFactory(Charset.forName("UTF-8"))));
                connector.setHandler(new TC4HostClientHandler());

                ConnectFuture future1 = connector.connect(address);
                future1.awaitUninterruptibly();
                if (!future1.isConnected()) {
                    return;
                }
                IoSession session = future1.getSession();
                System.out.println("CGS Sign On");
                session.getConfig().setUseReadOperation(true);
                session.write(" 00000333CST 1001010 00000308000003080010000000009600000000FTS O00000146TC4DS 001WSJTC41 ---001NTMU9001-I --------000 0030000000012400000096500007013082015SATYA 500000 010165070000002200011 01800000000022000001241 172.16.25.122 02");
                session.getCloseFuture().awaitUninterruptibly();
                System.out.println("After Writing");
                connector.dispose();
            }
        }

        import org.apache.mina.core.session.IdleStatus;
        import org.apache.mina.core.service.IoHandlerAdapter;
        import org.apache.mina.core.session.IoSession;
        import org.apache.mina.core.buffer.IoBuffer;

        public class TC4HostClientHandler extends IoHandlerAdapter {
            @Override
            public void exceptionCaught(IoSession session, Throwable cause) throws Exception {
                cause.printStackTrace();
            }

            @Override
            public void messageSent(IoSession session, Object message) throws Exception {
                String str = message.toString();
                System.out.println("Message Sent" + str);
            }

            @Override
            public void messageReceived(IoSession session, Object message) throws Exception {
                IoBuffer buf = (IoBuffer) message;
                // Print out read buffer content.
                while (buf.hasRemaining()) {
                    System.out.print((char) buf.get());
                }
                System.out.flush();
            }

            /*
            @Override
            public void messageReceived(IoSession session, Object message) throws Exception {
                String str = message.toString();
                System.out.println("Message Received : " + str);
            }
            */

            @Override
            public void sessionIdle(IoSession session, IdleStatus status) throws Exception {
                System.out.println("IDLE " + session.getIdleCount(status));
            }

            public void sessionClosed(IoSession session) {
                System.out.println("Closed ");
            }

            public void sessionOpened(IoSession session) {
                System.out.println("Opened ");
            }
        }
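
    One hedged possibility (mine, not from the thread): TextLineCodecFactory only hands a message to the handler after it sees a line terminator, so if the host's 36- and 505-byte replies never contain a newline, the decoder keeps buffering and messageReceived never fires, even though the raw RECEIVED lines appear in the log (the logger sits before the codec in the filter chain). Also note the handler casts the message to IoBuffer, which only matches what arrives when no decoding codec is installed. Removing the codec, or swapping in one that matches the protocol's actual framing, would be the test:

        // Sketch only: without the text-line codec, the handler receives raw
        // IoBuffer messages as soon as bytes arrive, which matches the
        // handler's existing cast to IoBuffer.
        connector.getFilterChain().addLast("logger", new LoggingFilter());
        // connector.getFilterChain().addLast("codec", ...);  // omit the line codec
        connector.setHandler(new TC4HostClientHandler());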

    Read the article

  • Entering BIOS Setup from Supermicro IPMI KVM

    - by Joshua Lim
    I'm having trouble getting into BIOS Setup from the Supermicro IPMI "KVM" (Remote Control Console Redirection). I need to change the boot order to CD-ROM first. I'm running Windows Server 2008. After some Googling, it says here that the method is to press TAB to enter the Setup screen, then press Esc twice to take effect: http://www.supermicro.com/support/faqs/faq.cfm?faq=6222 A month ago, I tried that 30-40 times plus DEL, over 2 hours, and it worked. Now I've been trying the same key combination for more than an hour, rebooting each time it failed, and it still doesn't work. Note: I've only got a notebook computer, no extra monitor.

    Read the article

  • What CPU hardware performance counter tool do you guys use?

    - by Hao Shen
    I just want to know the popular tools that I can use. Originally, I used perfmon2. However, I recently installed the latest Ubuntu 12 release on my Ivy Bridge i7 machine, and now there is a compilation error for pfmon. :< From http://comments.gmane.org/gmane.comp.linux.perfmon2.devel/3255 , it seems that perfmon2 does not work after kernel 2.6.30, and it suggests using the perf tool instead. I just want to confirm: is there any popular application-level performance counter monitoring tool which can be used with later kernel versions?
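
    For reference, a hedged example of the perf tool the thread points to (standard perf usage; ./myapp is a hypothetical binary, not from the post):

        # Count cycles and instructions for one run of a program
        perf stat -e cycles,instructions ./myapp

        # Sample hot functions system-wide for 10 seconds, then view the report
        perf record -a -- sleep 10
        perf report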

    Read the article

  • Will this increase my Virtual Private Server failure rate?

    - by Spencer Lim
    Will this increase my Virtual Private Server failure rate if I:

      • install Microsoft Windows Server 2008 Enterprise
      • install SQL Server 2008 Enterprise
      • install IIS 7.5
      • install ASP.NET MVC 2
      • install Microsoft Exchange (should it live inside WS2008, or standalone?)
      • install Team Foundation Server (should it live inside WS2008, or standalone?)

    on one mini VPS, with the specification of a DELL PowerEdge R710 shared plan: 16GB of DDR3 ECC RAM (1GB for this VPS), a DELL PERC 6i RAID controller (this thing alone costs about 1.5k-2k) and SAS HDDs (15K RPM, 146GB; 33GB for this VPS). Each HDD is freakishly fast, with over 300MB/s read/write possible with proper tuning. The motherboard is a DELL and it has twin redundant PSUs (870 watt, 85% efficiency). It's running on Intel Xeon 5502 (quad core) x2, so about 8 physical processors (fairly shared). Is there any rule of thumb about which services a single VPS can host? Because my resources are limited =.@ May I know what will happen if it is installed this way (maybe it seems to defeat the purpose of a "VPS")? Or any guideline on this issue (fully configuring Windows Server 2008 R2)? Thx for reply

    Read the article

  • Printing foreign text with PHP on Ubuntu and CentOS

    - by hao
    Hey guys, I am using DOMDocument and things like $div->nodeValue to obtain certain info from a web page. On my Ubuntu machine, when I do php crawl.php, everything is displayed properly in Chinese (the page is in UTF-8). However, on my CentOS machine, using the same code, I get æ´å¤åå¸ when I print in the terminal, and when I save it to the database, the characters are also messed up. One thing I noticed is that when I do print $content, both systems display it properly.
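
    A hedged guess (mine, not from the post): that æ´å¤åå¸ pattern is UTF-8 bytes being reinterpreted as Latin-1, which happens when DOMDocument::loadHTML isn't told the page's encoding (it assumes ISO-8859-1 when no meta charset is found), or when the database connection isn't set to UTF-8. A minimal sketch of both guards, with hypothetical credentials:

        <?php
        // Sketch only: force DOMDocument to treat the page as UTF-8, and make
        // the MySQL connection speak UTF-8 before saving.
        $doc = new DOMDocument();
        $doc->loadHTML(mb_convert_encoding($content, 'HTML-ENTITIES', 'UTF-8'));

        $db = new mysqli('localhost', 'user', 'pass', 'crawl');  // hypothetical
        $db->set_charset('utf8');
        ?>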

    Read the article

  • How do I set up a Windows Server 2008 R2 + SQL Server 2008 VPS?

    - by Spencer Lim
    I wish to deploy a trusted app in a secure way. I got one empty VPS (no operating system), but I don't know how I could install Windows Server 2008 R2 and SQL Server 2008. The versions I have are genuine Enterprise/Datacenter and SQL Enterprise. The main purpose is to deploy ASP.NET v4 MVC 2 and XBAP apps + LINQ, and also to use SQL Server for my Windows application, with some way to make it remotely accessible. May I know if anyone here could teach me, or introduce some source on how to set up the domain, IP, OS and features, everything, please... I feel confused here @.@

    Read the article

  • PHP cannot connect to AWS RDS server

    - by Eugene Lim
    I have an EC2 instance with Apache, PHP and phpMyAdmin. I have connected phpMyAdmin to manage the AWS RDS server; they are in the same security group. But when I try to use a PHP script to connect to the AWS RDS server, it gives me SQLSTATE[HY000] [2003] Can't connect to MySQL server on (xxx.xxx.xxx.xxx). I did some research, and most sources say to use

        setsebool -P httpd_can_network_connect=1

    Reference: http://www.filonov.com/2009/08/07/sqlstatehy000-2003-cant-connect-to-mysql-server-on-xxx-xxx-xxx-xxx-13/ But I have no idea which server to configure, and how?
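
    A hedged note, not from the post: setsebool toggles an SELinux boolean on the machine running the web server, so it would be run on the EC2 instance hosting PHP, not on RDS (which offers no shell); it only applies if that instance's OS has SELinux enforcing at all. A sketch, assuming a root shell on the EC2 box:

        # Sketch only: check whether SELinux is blocking outbound connections
        # from Apache, then allow them persistently.
        getenforce
        getsebool httpd_can_network_connect
        setsebool -P httpd_can_network_connect 1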

    Read the article

  • Will this increase my VPS failure rate?

    - by Spencer Lim
    Will this increase my Virtual Private Server failure rate if I:

      • install Microsoft Windows Server 2008 Enterprise
      • install SQL Server 2008 Enterprise
      • install IIS 7.5
      • install ASP.NET MVC 2
      • install Microsoft Exchange
      • install Team Foundation Server

    on one mini VPS, with the specification of a DELL PowerEdge R710 shared plan: 16GB of DDR3 ECC RAM (1GB for this VPS), a DELL PERC 6i RAID controller (this thing alone costs about 1.5k-2k) and SAS HDDs (15K RPM, 146GB; 33GB for this VPS). Each HDD is freakishly fast, with over 300MB/s read/write possible with proper tuning. The motherboard is a DELL and it has twin redundant PSUs (870 watt, 85% efficiency). It's running on Intel Xeon 5502 (quad core) x2, so about 8 physical processors (fairly shared). Is there any rule of thumb about which services a single VPS can host? Thx for reply

    Read the article

  • Why is my DSDT table different from what I found online?

    - by Hao Shen
    I have found a field in the DSDT table which I want to modify, from here: http://www.ztex.de/misc/c2ctl.e.html Generally, I want to modify the _PSS field for the processor so that I can have more frequency levels available in the cpufreq driver interface. I tried to use these commands to disassemble the DSDT table from my desktop (Linux 2.6.29, Intel Core 2):

        cat /proc/acpi/dsdt > dsdt.aml
        iasl -d dsdt.aml

    Then I have a file dsdt.dsl as follows (very long, so I just show the beginning of the file):

        /*
         * Intel ACPI Component Architecture
         * AML Disassembler version 20090123
         *
         * Disassembly of dsdt.aml, Mon May 6 20:41:40 2013
         *
         *
         * Original Table Header:
         *     Signature        "DSDT"
         *     Length           0x00003794 (14228)
         *     Revision         0x01 **** ACPI 1.0, no 64-bit math support
         *     Checksum         0x46
         *     OEM ID           "DELL"
         *     OEM Table ID     "dt_ex"
         *     OEM Revision     0x00001000 (4096)
         *     Compiler ID      "INTL"
         *     Compiler Version 0x20050624 (537200164)
         */
        DefinitionBlock ("dsdt.aml", "DSDT", 1, "DELL", "dt_ex", 0x00001000)
        {
            Method (DBIN, 0, NotSerialized)
            {
                Noop
            }

            Scope (\)
            {
                Device (_SB.VBTN)
        ...

    But I cannot find the _PSS field as shown on the website I have given above. I do not know why. I am sure the current cpufreq driver shows 4 frequency levels available, so at least there should be something in the table showing this, right? Has anybody here played with the DSDT table before? Thanks.
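
    A hedged pointer (mine, not from the post): on many machines the processor _PSS objects live in the SSDTs rather than the DSDT, which would explain a DSDT containing no _PSS at all. A sketch of where to look, assuming a kernel that exposes the tables under sysfs (paths vary by kernel):

        # Sketch only: dump and disassemble an SSDT alongside the DSDT, then
        # search the result for the performance-state package.
        cat /sys/firmware/acpi/tables/SSDT > ssdt.aml
        iasl -d ssdt.aml
        grep -n "_PSS" ssdt.dsl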

    Read the article

  • How to run a DHCP service on Windows 7 Home

    - by Joshua Lim
    I'm trying to set up a DHCP server on Windows 7 Home. I tried a couple of freeware tools which I found on the Internet, but none seemed to work. What I did: on the Windows 7 machine, I installed the DHCP server with the range 192.168.1.12-192.168.1.256. I set the Gigabit Ethernet adapter to a static IP address of 192.168.1.11 and subnet mask of 255.255.255.0. When I did an ipconfig, it showed:

        Ethernet adapter Local Area Connection:

           Connection-specific DNS Suffix  . :
           Description . . . . . . . . . . . : Broadcom NetLink (TM) Gigabit Ethernet
           Physical Address. . . . . . . . . : 8C-73-6E-75-A7-56
           DHCP Enabled. . . . . . . . . . . : No
           Autoconfiguration Enabled . . . . : Yes
           Link-local IPv6 Address . . . . . : fe80::196d:b6bb:8f93:2555%12(Preferred)
           IPv4 Address. . . . . . . . . . . : 192.168.1.11(Preferred)
           Subnet Mask . . . . . . . . . . . : 255.255.255.0
           Default Gateway . . . . . . . . . :
           DNS Servers . . . . . . . . . . . : fec0:0:0:ffff::1%1
                                               fec0:0:0:ffff::2%1
                                               fec0:0:0:ffff::3%1
           NetBIOS over Tcpip. . . . . . . . : Enabled

    I connected another Windows 7 machine to the "DHCP server" using a crossover cable and set the network adapter on that machine to obtain an IP address automatically. The client fails to acquire the correct IP address from the DHCP server and shows an autoconfigured IPv4 address instead. Here's the information returned by ipconfig /all on the client machine:

        Ethernet adapter Local Area Connection:

           Connection-specific DNS Suffix  . :
           Description . . . . . . . . . . . : Atheros AR8152/8158 PCI-E Fast Ethernet Controller (NDIS 6.20)
           Physical Address. . . . . . . . . : 54-04-A6-40-96-4B
           DHCP Enabled. . . . . . . . . . . : Yes
           Autoconfiguration Enabled . . . . : Yes
           Link-local IPv6 Address . . . . . : fe80::4885:4082:5572:5a85%12(Preferred)
           Autoconfiguration IPv4 Address. . : 169.254.90.133(Preferred)
           Subnet Mask . . . . . . . . . . . : 255.255.0.0
           Default Gateway . . . . . . . . . :
           DHCPv6 IAID . . . . . . . . . . . : 341050534
           DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-17-54-78-12-00-08-CA-46-4C-5A
           DNS Servers . . . . . . . . . . . : fec0:0:0:ffff::1%1
                                               fec0:0:0:ffff::2%1
                                               fec0:0:0:ffff::3%1
           NetBIOS over Tcpip. . . . . . . . : Enabled

    The DHCP Client service is running on both machines. I've tried many times but failed. Googling also returned no useful information for my scenario. Have I missed out any step? Thanks.

    Read the article

  • How to configure networking on an appliance such that it can plug and play on any corporate network?

    - by Joshua Lim
    I had a chance to configure a Moxa NPort device server appliance on my client's network; it was very easy to do, done in just 2 minutes. Here's what I did: the Moxa device server had a preset IP address of 192.168.127.254 and subnet mask 255.255.255.0 - http://www.moxa.com/doc/manual/nport/5400/NPort_5400_Series_Users_Manual_v4.pdf Moxa provides Windows software which I used to "scan" for the device server. It worked like magic! The software returns a list of device servers found. Each device server is identified by MAC address, and by selecting the device server in the software, I can reset the default IP address and subnet mask of that device server!

    In comparison, during an earlier project, I spent 2 hours trying to get KVM to work for a Windows 7 embedded appliance I was installing on my client's network - http://superuser.com/questions/380305/how-to-configure-windows-7-professional-appliance-pc-on-my-clients-network-usin Prior to that, I had already tried pre-configuring the IP address and subnet mask to the ones my client provided, yet the appliance still couldn't connect to the client's network! I also tried a crossover cable, which didn't work either. After KVM worked, I discovered that the network settings were "lost" after I plugged the machine into the client's network.

    Now my question is: what can I do to set up my Windows 7 embedded appliance so that it can connect to any network like the Moxa device server can? I tried experimenting on my network using a Windows machine configured to an IP address of 192.168.127.254 and subnet mask 255.255.255.0, but it doesn't connect to my network, which uses 192.168.0.*. :( EDIT: I would like to point out that the Moxa Windows configuration software seems to be able to connect to any Moxa device connected to the network, even if it is on a different subnet, as long as the network adapter shows "connected". This is important because the Moxa device has no VGA port or interface through which to configure the IP address.

    Read the article

1 2 3 4  | Next Page >