Is Computer Science science, applied mathematics, engineering, art, philosophy? "Other"?
For background, here is Steven Wartik's blog post for Scientific American titled "I'm not a real scientist, and that's okay." The article covers some good topics for this question, but it leaves more open than it answers.
If you can think of the discipline, how would computer science fit into its definition? Should the discipline of computer science be based on what programmers do, or on what academics do? What kind of answers do you get from people who seem to have thought deeply about this? What reasons do they give?
I am learning Python and am intrigued by the following point in PEP 20, The Zen of Python:
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Could anyone offer any concrete examples of this maxim? I am particularly interested in the contrast with other languages such as Ruby. Part of the Ruby design philosophy (originating with Perl, I think?) is that having multiple ways of doing it is A Good Thing. Can anyone offer some examples showing the pros and cons of each approach? Note, I'm not after an answer to which is better (that is probably too subjective to ever be answered), but rather an unbiased comparison of the two styles.
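One commonly cited illustration, as a sketch rather than an exhaustive comparison: building a list of squares. Python's idiomatic form is essentially the list comprehension, while Ruby deliberately supports several equivalent spellings, shown here as comments:

```python
# Python tends to offer one obvious spelling per task; a list of
# squares has the comprehension as its single idiomatic form:
squares = [n * n for n in range(5)]
print(squares)

# Ruby, by design, accepts several equivalents (as comments here):
#   (0...5).map { |n| n * n }
#   (0...5).collect { |n| n * n }
#   squares = []; (0...5).each { |n| squares << n * n }
```

The trade-off usually claimed: one obvious way makes code easier to read across authors; many ways lets each author pick the form that reads best in context.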
As recently reported here:
Xamarin has forked Cocos2D-XNA, a 2D/3D game development framework,
creating a cross-platform library that can be included in PCL
projects.
However the founder of the project that was forked says:
The purpose of the MIT license is to unencumber your fair use. Not to
encourage you to take software, rebrand it as your own, and then "take
it in a new direction" as you say.
While not illegal, it is unethical.
It seems that the GitHub page of the new project doesn't even indicate that it's a fork in a typical GitHub manner, opting for an easily-removable History section instead (see bottom).
So my questions are:
Were Xamarin's action, and the way it was carried out, ethical or not?
Is it possible to avoid such a situation if you are a single
developer or a small unfunded group of developers?
I am hoping this can either be a wiki question or that there will be some objective answers grounded in modern OSS ethics/philosophy.
ps command output is truncated on Solaris. I tried the command below after some googling, but it doesn't work, and I'm not sure what needs to be done.
/usr/ucb/ps awwx
I am using Solaris.
I have a monitoring script that uses other scripts as plugins.
These plugins are also scripts, which work in different ways, like:
1. Sending an alert on high memory utilization
2. High CPU usage
3. Full disk space
4. Checking for a core file dump
Now all of this is displayed on my terminal, and I want to put it into an HTML file/format and send it as the body of the mail, not as an attachment.
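As a sketch of one way to do this (the plugin scripts are replaced by stand-in commands, and the recipient address is a placeholder): collect the output into an HTML body, then pipe it to sendmail with a Content-Type header so it arrives inline rather than as an attachment:

```shell
#!/bin/sh
# Build an HTML report from the plugin output.  uptime and df stand
# in for the real plugin scripts.
report=/tmp/monitor_report.html
{
  echo "<html><body><h2>Monitoring report</h2><pre>"
  uptime
  df -h
  echo "</pre></body></html>"
} > "$report"

# Send the HTML as the message body (not an attachment) by supplying
# the Content-Type header ourselves, if sendmail is available.
if command -v sendmail >/dev/null 2>&1; then
  {
    printf 'To: admin@example.com\n'          # placeholder recipient
    printf 'Subject: Monitoring report\n'
    printf 'MIME-Version: 1.0\n'
    printf 'Content-Type: text/html; charset=us-ascii\n\n'
    cat "$report"
  } | sendmail -t
fi
```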
Thanks.
The function that creates shared memory in *nix programming takes a key as one of its parameters.
What is the meaning of this key, and how can I use it?
Edit:
I mean the key, not the shared memory id.
An Apache 2.x Webserver with default configurations from the ubuntu/debian repositories will use the www-data unix account for apache2 processes handling web requests. Assuming that apache is serving two different sites (domain1.com and domain2.com), is it possible for apache to use unix user www-data1 when handling requests to domain1.com, and use unix user www-data2 when handling requests to domain2.com? The motivation is to isolate the code for each domain name from one another.
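One approach, assuming you can install the third-party mpm-itk MPM (packaged on Debian/Ubuntu as apache2-mpm-itk), is to assign a unix user per virtual host with AssignUserId; the user and group names below are illustrative:

```apache
# Sketch: requires mpm-itk; stock Apache MPMs cannot switch the
# unix user per virtual host.
<VirtualHost *:80>
    ServerName domain1.com
    DocumentRoot /var/www/domain1
    AssignUserId www-data1 www-data1
</VirtualHost>

<VirtualHost *:80>
    ServerName domain2.com
    DocumentRoot /var/www/domain2
    AssignUserId www-data2 www-data2
</VirtualHost>
```

The other common route is to run each site's code out-of-process under its own user, e.g. via FastCGI or suexec for CGI, rather than inside the Apache workers themselves.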
I need to format some hexdump like this:
00010: 02 03 04 05
00020: 02 03 04 08
00030: 02 03 04 08
00010: 02 03 04 05
00020: 02 03 04 05
02 03 04 05
02 03 04 08
to
02 03 04 05
02 03 04 08
02 03 04
02 03 04 05
02 03 04 05
02 03 04 05
02 03 04
a) remove the address fields, if present
b) remove any 08 at the end of a paragraph (followed by an empty line)
c) remove any empty lines
How can this be done using lex? thanks!
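A minimal flex sketch of the three rules; it assumes every paragraph, including the last, is terminated by a blank line, and that the only "08" removed is the one ending a paragraph:

```lex
%{
/* Filter: (a) strip leading address fields, (b) drop a trailing
   " 08" before a blank line, (c) remove the blank lines. */
%}
%option noyywrap
%%
^[0-9A-Fa-f]+:[ \t]*   { /* (a) drop the address field */ }
" 08"\n\n              { /* (b)+(c) trailing 08 and its blank line */ putchar('\n'); }
\n\n                   { /* (c) collapse remaining blank lines */ putchar('\n'); }
.|\n                   { ECHO; }
%%
int main(void) { return yylex(); }
```

Build with `flex file.l && cc lex.yy.c` and run the result as a filter over the dump.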
Hi, I was writing a script to check whether a number is an Armstrong number or not. This is my code:
echo "Enter Number"
read num
sum=0
item=$num
while [ $item -ne 0 ]
do
rem='expr $item % 10'
cube='expr $rem \* $rem \* $rem'
sum='expr $sum + $cube'
item='expr $item / 10'
done
if [ $sum -eq $num ]
then
echo "$num is an Amstrong Number"
else
echo "$num is not an Amstrong Number"
fi
After I run this script,
$ ./arm.sh
I always get this error
./arm.sh: line 5: [: too many arguments
./arm.sh: line 12: [: too many arguments
I am on cygwin.
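The likely culprit is the quoting: plain single quotes around expr assign the literal string (e.g. "expr $item % 10") instead of running it, and that unquoted string then word-splits inside [ ... ], producing "[: too many arguments" at the while (line 5) and if (line 12) tests. A corrected sketch using command substitution, with a fixed input instead of read for illustration:

```shell
#!/bin/sh
# Same logic as the script above, but with $(...) so expr actually
# runs.  153 = 1^3 + 5^3 + 3^3, so it is an Armstrong number.
num=153                 # stands in for "read num" in this sketch
sum=0
item=$num
while [ "$item" -ne 0 ]
do
    rem=$(expr "$item" % 10)
    cube=$(expr "$rem" \* "$rem" \* "$rem")
    sum=$(expr "$sum" + "$cube")
    item=$(expr "$item" / 10) || true  # expr exits non-zero when the result is 0
done
if [ "$sum" -eq "$num" ]
then
    result="$num is an Armstrong number"
else
    result="$num is not an Armstrong number"
fi
echo "$result"
```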
I have a unix timestamp for the current time. I want to get the unix timestamp for the start of the next day.
$current_timestamp = time();
$allowable_start_date = strtotime('+1 day', $current_timestamp);
As I am doing it now, I am simply adding one whole day to the Unix timestamp, when instead I would like to figure out how many seconds are left in the current day, and add only that many seconds, to get the Unix timestamp for the very first minute of the next day.
What is the best way to go about this?
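One way, relying on strtotime()'s documented relative format "tomorrow", which resolves to 00:00:00 of the day after the base timestamp:

```php
<?php
// Sketch: "tomorrow" gives midnight of the next day relative to the
// base timestamp, so no manual seconds-left arithmetic is needed.
$current_timestamp    = time();
$allowable_start_date = strtotime('tomorrow', $current_timestamp);

// For comparison, the seconds remaining in the current day:
$seconds_left = $allowable_start_date - $current_timestamp;

echo date('Y-m-d H:i:s', $allowable_start_date), "\n";
```

Note this uses the server's configured timezone; set date_default_timezone_set() first if that matters.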
I am writing a program on Linux with gcc...
When I tried to include <math.h>, I found that I need to link the math library by compiling with gcc -lm.
But I am searching for another way to link the math library 'in code', one that does not require the user to compile with any extra options.
Can the effect of gcc -lm be achieved in C code using #pragma or something similar?
EDIT:
I have changed -ml to -lm
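For what it's worth, gcc has no in-source equivalent of MSVC's #pragma comment(lib, ...), so the usual fix is to move the flag into the build rule instead; a minimal Makefile sketch (the program and file names are assumptions):

```makefile
# Users then just type "make" and never see the -lm option.
prog: prog.c
	gcc -o prog prog.c -lm
```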
This script connects to different servers and copies a file from a defined location.
It is mandatory to use sftp, not ftp.
#!/usr/bin/ksh -xvf
Detail="jyotibo|snv4915|/tlmusr1/tlm/rt/jyotibo/JyotiBo/ jyotibo|snv4915|/tlmusr1/tlm/rt/jyotibo/JyotiBo/"
password=Unix11!
c_filename=import.log
localpath1=`pwd`
for i in $Detail
do
echo $i
UserName=`echo $i | cut -d'|' -f1`
echo $UserName
remotehost=`echo $i | cut -d'|' -f2`
echo $remotehost
remote_path=`echo $i | cut -d'|' -f3`
echo $remote_path
{
echo "open $remotehost
user $UserName $password
lcd $localpath1
cd $remote_path
bi
prompt
mget $c_filename
prompt
"
} |ftp -i -n -v 2>&1
done
I want to do the same thing using sftp instead of ftp.
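A sketch of the sftp equivalent: sftp reads commands from a batch file via -b, but unlike the ftp here-document it will not accept a password from the script, so this assumes key-based authentication is already set up for the remote account (values are taken from the script above; the transfer line is left commented out):

```shell
#!/bin/sh
# Build an sftp batch file mirroring the ftp here-document above.
UserName=jyotibo
remotehost=snv4915
remote_path=/tlmusr1/tlm/rt/jyotibo/JyotiBo/
c_filename=import.log
localpath1=$(pwd)

batchfile=/tmp/sftp_batch.txt
cat > "$batchfile" <<EOF
lcd $localpath1
cd $remote_path
get $c_filename
EOF

# Uncomment to run the transfer once SSH keys are in place:
# sftp -b "$batchfile" "$UserName@$remotehost"
```

For password authentication you would need a helper such as expect or sshpass, since sftp deliberately reads the password only from a terminal.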
I have a text file with contents as below:
1,A,100
2,A,200
3,B,150
4,B,100
5,B,250
I need the output as:
A,300
B,500
The logic here is to sum all the 3rd fields whose 2nd field is A, and likewise for B.
How could we do it using awk?
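A sketch using an awk associative array keyed on the 2nd field; the input file is recreated here so the example is self-contained:

```shell
#!/bin/sh
# Recreate the sample data, then sum field 3 grouped by field 2.
cat > /tmp/sum_input.txt <<'EOF'
1,A,100
2,A,200
3,B,150
4,B,100
5,B,250
EOF

awk -F, '{ sum[$2] += $3 } END { for (k in sum) print k "," sum[k] }' \
    /tmp/sum_input.txt | sort
```

The sort at the end is only to make the output order deterministic, since awk's `for (k in sum)` iterates in unspecified order.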
I want to grep for a particular word in multiple files. The multiple files are stored in the variable TESTING.
TESTING=$(ls -tr *.txt)
echo $TESTING
test.txt ab.txt bc.txt
grep "word" "$TESTING"
grep: can't open test.txt
ab.txt
bc.txt
This gives me an error. Is there any other way to do it, other than a for loop?
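The quotes are the problem: "$TESTING" passes the whole newline-separated list as a single file name. Leaving the expansion unquoted lets it word-split into separate arguments; a self-contained sketch in a scratch directory:

```shell
#!/bin/sh
# Demonstrate quoted vs unquoted expansion of a file list.
cd "$(mktemp -d)"
echo "word here"  > test.txt
echo "no match"   > ab.txt
echo "word again" > bc.txt

TESTING=$(ls -tr *.txt)
grep "word" $TESTING     # unquoted: splits into three file arguments
```

With filenames that may contain spaces, skip the variable entirely and use the glob directly: `grep "word" *.txt`.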
I have 4 processes:
p1 - bursts 5, priority: 3
p2 - bursts 8, priority: 2
p3 - bursts 12, priority: 2
p4 - bursts 6, priority: 1
Assuming that all processes arrive at the scheduler at the same time, what are the average response time and the average turnaround time?
For FCFS, is it OK to have them in the order p1, p2, p3, p4 in the execution queue?
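A sketch of the FCFS arithmetic for the order p1, p2, p3, p4 (priorities are ignored under FCFS; response time is taken here as the time until a process first gets the CPU, and turnaround as its completion time, both measured from arrival at t=0):

```python
# FCFS with all processes arriving at t=0, run in order p1..p4.
bursts = [5, 8, 12, 6]            # p1..p4, from the question

t = 0
response, turnaround = [], []
for b in bursts:
    response.append(t)            # starts when the CPU frees up
    t += b
    turnaround.append(t)          # finishes at the running total

print("avg response:", sum(response) / len(response))        # 10.75
print("avg turnaround:", sum(turnaround) / len(turnaround))  # 18.5
```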
I need to call another shell script, testarg.sh, within my main script. testarg.sh takes the arguments ARG1, ARG2, ARG3. I need to call it the following way:
./testarg.sh -ARG1 -ARG2 -ARG3
The ARG1 and ARG3 arguments are mandatory; if they are not passed to the main script, I quit. ARG2 is optional: if the ARG2 variable is not set or has no value, I need not pass it from the main script, so I call it the following way:
./testarg.sh -ARG1 -ARG3
If a value exists for the ARG2 variable, then I need to call it the following way:
./testarg.sh -ARG1 -ARG2 -ARG3
Do I need an if/else statement to check whether the ARG2 variable is empty or null, or is there another way to do it?
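One way to avoid an if/else around the call itself is to build the argument list with set --, appending the optional argument only when it has a value. In this sketch /tmp/testarg.sh is a stand-in stub that just echoes its arguments, and the values are illustrative:

```shell
#!/bin/sh
# Stub for testarg.sh so the sketch is self-contained.
cat > /tmp/testarg.sh <<'EOF'
#!/bin/sh
echo "called with: $@"
EOF
chmod +x /tmp/testarg.sh

ARG1=foo
ARG2=                 # empty, so it will be skipped
ARG3=baz

set -- "-$ARG1"
if [ -n "$ARG2" ]; then
    set -- "$@" "-$ARG2"   # appended only when ARG2 has a value
fi
set -- "$@" "-$ARG3"

/tmp/testarg.sh "$@"
```

One short test on ARG2 remains, but the call site itself is written once for both cases.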
Hi,
I have a problem while implementing a FUSE filesystem in Python.
For now I just have a proxy filesystem, acting exactly like a mount --bind would.
But any file created, opened, or read on my filesystem is not released (the corresponding FD is not closed).
Here is an example :
yume% ./ProxyFs.py `pwd`/test
yume% cd test
yume% ls
mdr
yume% echo test > test
yume% ls
mdr test
yume% ps auxwww | grep python
cor 22822 0.0 0.0 43596 4696 ? Ssl 12:57 0:00 python ./ProxyFs.py /home/cor/esl/proxyfs/test
cor 22873 0.0 0.0 6352 812 pts/1 S+ 12:58 0:00 grep python
yume% ls -l /proc/22822/fd
total 0
lrwx------ 1 cor cor 64 2010-05-27 12:58 0 -> /dev/null
lrwx------ 1 cor cor 64 2010-05-27 12:58 1 -> /dev/null
lrwx------ 1 cor cor 64 2010-05-27 12:58 2 -> /dev/null
lrwx------ 1 cor cor 64 2010-05-27 12:58 3 -> /dev/fuse
l-wx------ 1 cor cor 64 2010-05-27 12:58 4 -> /home/cor/test/test
yume%
Does anyone have a solution to actually close the FDs of the files used in my filesystem?
I'm pretty sure there is a mistake in my implementation of the open, read, and write hooks, but I'm stuck...
Let me know if you need more details!
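For reference, a sketch of the usual pattern (the method names follow the python-fuse xmp.py example, which is an assumption about the bindings in use): keep the descriptor on the file-handle object and close it in release(), the hook FUSE invokes when the last user descriptor is closed:

```python
import os

# Without the os.close() in release(), the descriptor stays open in
# the daemon exactly as in the ls -l /proc/<pid>/fd listing above.
class ProxyFile:
    def __init__(self, path, flags, *mode):
        self.fd = os.open(path, flags, *mode)

    def read(self, length, offset):
        return os.pread(self.fd, length, offset)

    def write(self, buf, offset):
        return os.pwrite(self.fd, buf, offset)

    def release(self, flags):
        os.close(self.fd)    # the hook that is typically missing
```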
Thanks a lot
Cor
I am trying to write a simple bash script that listens on a port and responds with a trivial HTTP response. My specific issue is that I am not sure whether a given port is available, so in case of bind failure I fall back to the next port until bind succeeds.
So far to me the easiest way to achieve this was something like:
for (( i=$PORT_BASE; i < $(($PORT_BASE+$PORT_RANGE)); i++ ))
do
if [ $DEBUG -eq 1 ] ; then
echo trying to bind on $i
fi
/usr/bin/faucet $i --out --daemon echo test 2>/dev/null
if [ $? -eq 0 ] ; then #success?
port=$i
if [ $DEBUG -eq 1 ] ; then
echo "bound on port $port"
fi
break
fi
done
Here I am using faucet from netpipes Ubuntu package.
The problem with this is that if I simply print "test" to the output, curl complains about non-standard HTTP response (error code 18). That's fair enough as I don't print HTTP-compatible response.
If I replace echo test with echo -ne "HTTP/1.0 200 OK\r\n\r\ntest", curl still complains:
user@server:$ faucet 10020 --out --daemon echo -ne "HTTP/1.0 200 OK\r\n\r\ntest"
...
user@client:$ curl ip.of.the.server:10020
curl: (56) Failure when receiving data from the peer
I think the problem lies in how faucet is printing the response and handling the connection. For example if I do the server side in netcat, curl works fine:
user@server:$ echo -ne "HTTP/1.0 200 OK\r\n\r\ntest\r\n" | nc -l 10020
...
user@client:$ curl ip.of.the.server:10020
test
user@client:$
I would be more than happy to replace faucet with netcat in my main script, but the problem is that I want to spawn an independent server process, to be able to run the client from the same base shell. faucet has a very handy --daemon parameter, as it forks to background and I can use $? (the exit status code) to check whether the bind succeeded. If I were to use netcat for a similar purpose, I would have to fork it using & and $? would not work.
Does anybody know why faucet isn't responding correctly in this particular case, and/or can suggest a solution to this problem? I am not married to either faucet or netcat, but I would like the solution to be implemented using bash or its utilities (as opposed to writing something in yet another scripting language, such as Perl or Python).
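On the forking half of the problem, one pattern that works with netcat-style tools is to background the process and then probe it with kill -0 after a short delay; in this sketch "false" and "sleep 5" are stand-ins for a server that failed to bind and one that bound and kept running:

```shell
#!/bin/sh
# "Fork with & but still detect failure": launch in the background,
# give it a moment, then check whether the process is still alive.
start_bg() {
    "$@" &
    pid=$!
    sleep 1
    if kill -0 "$pid" 2>/dev/null; then
        echo "running (pid $pid)"
        return 0
    else
        echo "failed to start"
        return 1
    fi
}

start_bg false || true   # exits immediately -> "failed to start"
start_bg sleep 5         # still alive after the check -> "running"
```

The one-second grace period is a heuristic: a server that fails to bind usually exits immediately, so surviving the delay is taken as success.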
How do I find the MAC address of a network card on IRIX? I'd rather not shell out to something that displays it and parse the output.
I'm coding C.
Methods that require root access are acceptable.
I am calling another shell script testarg.sh within my main script.
the logfiles of testarg.sh are stored in $CUSTLOGS in the below format
testarg.DDMONYY.PID.log
example: testarg.09Jun10.21165.log
In the main script, after the testarg process completes, I need to grep the log file for the text "ERROR" and "COMPLETED SUCCESSFULLY".
How do I get the PID of the process and combine it with DDMONYY for grepping? Also, I need to check whether the file exists before grepping.
$CUSTBIN/testarg.sh
$CUSTBIN/testarg.sh
rc=$?
if [ $rc -ne 0 ]; then
return $CODE_WARN
fi
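One approach, shown here as a self-contained sketch with stub paths under /tmp: it assumes the log name uses the child script's own PID (its $$), which is exactly what $! reports when the child is started in the background, so run it with &, wait for it, then build the name and grep only if the file exists:

```shell
#!/bin/sh
# Stub environment standing in for $CUSTBIN / $CUSTLOGS.
CUSTBIN=/tmp/custbin
CUSTLOGS=/tmp/custlogs
export CUSTLOGS
mkdir -p "$CUSTBIN" "$CUSTLOGS"

# Stand-in for the real testarg.sh: writes its log the same way.
cat > "$CUSTBIN/testarg.sh" <<'EOF'
#!/bin/sh
echo "COMPLETED SUCCESSFULLY" > "$CUSTLOGS/testarg.$(date +%d%b%y).$$.log"
EOF
chmod +x "$CUSTBIN/testarg.sh"

"$CUSTBIN/testarg.sh" &        # background, so $! is the child's PID
pid=$!
wait "$pid"
rc=$?

logfile="$CUSTLOGS/testarg.$(date +%d%b%y).$pid.log"
if [ -f "$logfile" ]; then     # existence check before grepping
    grep -e "ERROR" -e "COMPLETED SUCCESSFULLY" "$logfile"
fi
```

Note `date +%d%b%y` is an assumption about the DDMONYY format (e.g. 09Jun10); adjust if the real logs differ.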
Why fork() before setsid() to daemonize a process?
Basically, if I want to detach a process from its controlling terminal and make it a session leader, I use setsid().
Doing this without forking first doesn't work.
Why?
Thanks :)