Search Results

Search found 13082 results on 524 pages for 'ip camera'.

  • Is my SMTP server being used to send spam?

    - by Milos
    I have a server with Postfix on it. For the past several days I have noticed a lot of processes running on it, and when I checked, a lot of emails were being sent. Here is an example from the mail log:

    Aug 18 11:54:56 mem postfix/smtpd[9963]: connect from dslb-188-096-082-167.188.096.pools.vodafone-ip.de[188.96.82.167]
    Aug 18 11:54:56 mem postfix/smtpd[9301]: connect from unknown[186.113.45.4]
    Aug 18 11:54:56 mem postfix/smtpd[9963]: 525E7114012D: client=dslb-188-096-082-167.188.096.pools.vodafone-ip.de[188.96.82.167]
    Aug 18 11:54:56 mem postfix/cleanup[9970]: 525E7114012D: message-id=<B55835C9027BFA9D16CCBB556DB2F48BB82DF004000480BA-db0c3ce8aa74446411898d0d2feb3001@email.filmforthoughtinc.com>
    Aug 18 11:54:56 mem postfix/qmgr[2581]: 525E7114012D: from=<[email protected]>, size=10702, nrcpt=1 (queue active)
    Aug 18 11:54:56 mem postfix/smtpd[9301]: EC52711401DC: client=unknown[186.113.45.4]
    Aug 18 11:54:57 mem postfix/smtpd[9963]: disconnect from dslb-188-096-082-167.188.096.pools.vodafone-ip.de[188.96.82.167]
    Aug 18 11:54:57 mem postfix/cleanup[8597]: EC52711401DC: message-id=<4C905D97606B436FE50C6F738DE014D9D84F2185BA815D81-1a4dbe6fc2bfcc8183f5faf901cfa15e@email.manguerasespecializadas.com>
    Aug 18 11:54:57 mem postfix/smtp[9971]: 525E7114012D: to=<[email protected]>, relay=mail.mdpi.com[209.237.236.228]:25, delay=1.2, delays=0.55/0/0.45/0.16, dsn=5.1.1, status=bounced (host mail.mdpi.com[209.237.236.228] said: 550 5.1.1 <[email protected]>: Recipient address rejected: mdpi.com (in reply to RCPT TO command))
    Aug 18 11:54:57 mem postfix/cleanup[10067]: 8B1E11140268: message-id=<[email protected]>
    Aug 18 11:54:57 mem postfix/bounce[10001]: 525E7114012D: sender non-delivery notification: 8B1E11140268
    Aug 18 11:54:57 mem postfix/qmgr[2581]: 8B1E11140268: from=<>, size=12693, nrcpt=1 (queue active)
    Aug 18 11:54:57 mem postfix/qmgr[2581]: 525E7114012D: removed
    Aug 18 11:54:57 mem postfix/qmgr[2581]: EC52711401DC: from=<[email protected]>, size=10978, nrcpt=1 (queue active)
    Aug 18 11:54:57 mem postfix/smtp[10013]: connect to aspmx.l.google.com[2607:f8b0:400d:c03::1b]:25: Network is unreachable
    Aug 18 11:54:57 mem postfix/smtpd[9301]: disconnect from unknown[186.113.45.4]
    Aug 18 11:54:58 mem postfix/smtp[10013]: 8B1E11140268: to=<[email protected]>, relay=aspmx.l.google.com[74.125.22.26]:25, delay=0.5, delays=0.06/0/0.28/0.16, dsn=5.1.1, status=bounced (host aspmx.l.google.com[74.125.22.26] said: 550-5.1.1 The email account that you tried to reach does not exist. Please try 550-5.1.1 double-checking the recipient's email address for typos or 550-5.1.1 unnecessary spaces. Learn more at 550 5.1.1 http://support.google.com/mail/bin/answer.py?answer=6596 l7si24621420qad.26 - gsmtp (in reply to RCPT TO command))
    Aug 18 11:54:58 mem postfix/qmgr[2581]: 8B1E11140268: removed
    Aug 18 11:54:58 mem postfix/smtp[9971]: EC52711401DC: to=<[email protected]>, relay=mail.mdpi.com[209.237.236.228]:25, delay=1.2, delays=0.66/0/0.44/0.12, dsn=5.1.1, status=bounced (host mail.mdpi.com[209.237.236.228] said: 550 5.1.1 <[email protected]>: Recipient address rejected: mdpi.com (in reply to RCPT TO command))
    Aug 18 11:54:58 mem postfix/cleanup[9970]: 414361140254: message-id=<[email protected]>
    Aug 18 11:54:58 mem postfix/bounce[10001]: EC52711401DC: sender non-delivery notification: 414361140254
    Aug 18 11:54:58 mem postfix/qmgr[2581]: 414361140254: from=<>, size=13057, nrcpt=1 (queue active)
    Aug 18 11:54:58 mem postfix/qmgr[2581]: EC52711401DC: removed
    Aug 18 11:55:01 mem postfix/smtp[10002]: 414361140254: to=<[email protected]>, relay=manguerasespecializadas.com[99.198.96.210]:25, delay=2.9, delays=0.04/0/2.1/0.84, dsn=2.0.0, status=sent (250 OK id=1XJPGs-0007BE-OI)
    Aug 18 11:55:01 mem postfix/qmgr[2581]: 414361140254: removed

    Is my server being attacked or abused to send spam? How can I check that? Thank you.
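
    A rough way to check whether the machine itself is injecting the spam, rather than just receiving and bouncing it, is to tally where messages enter Postfix. The Python sketch below is only an illustration (the log path and field names are assumptions based on the log format quoted above); it counts connecting SMTP clients and envelope-sender domains so that any single abuser stands out. If one client or sender domain dominates, the usual next steps are inspecting the mail queue and checking whether an account or web script is relaying through the server.

        import re
        from collections import Counter

        LOG_PATH = "/var/log/mail.log"   # assumed location; adjust for your system

        clients = Counter()   # who connects to smtpd
        senders = Counter()   # envelope sender domains queued by qmgr

        with open(LOG_PATH, errors="replace") as log:
            for line in log:
                connect = re.search(r"postfix/smtpd\[\d+\]: connect from (\S+)", line)
                if connect:
                    clients[connect.group(1)] += 1
                queued = re.search(r"postfix/qmgr\[\d+\]: \w+: from=<([^>]*)>", line)
                if queued:
                    sender = queued.group(1) or "<bounce>"
                    senders[sender.split("@")[-1]] += 1

        print("Top connecting clients:")
        for host, count in clients.most_common(10):
            print(f"  {count:6d}  {host}")

        print("Top sender domains:")
        for domain, count in senders.most_common(10):
            print(f"  {count:6d}  {domain}")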

    Read the article

  • Weird Apache Access Logs

    - by user38480
    I see repeated requests like these in my Apache access logs, and they have been eating up all my CPU. I have a normal WordPress installation. All I changed in the Apache configuration was the DocumentRoot, from /var/www/html to /var/www, for both SSL and the default configuration. Also, the file referenced in the requests (updatedll.jpg) does not exist on my server and isn't referenced in the source code served by any page of the web application. Could this be a security threat? What are these requests, actually, and what can I do to stop them? I changed the IP address of my server and they still kept coming, meaning that somebody is actually hitting the domain name and not the IP address. Why does my server send a 301 for these requests? Shouldn't it be sending a 404? Is it because WordPress is installed in my root directory and the .htaccess file for WordPress is sending a 301 redirect? My disk access logs also seem to have high peaks intermittently, but nobody is actually accessing the site. I see no access logs except the ones below. Also, I see that all the requests seem to be coming from one of the following 5 IP addresses.

    201.4.132.43 - - [05/Jun/2014:07:35:08 -0400] "GET /updatedll.jpg HTTP/1.1" 301 465 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/4.0; BTRS103681; GTB7.5; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; InfoPath.2; OfficeLiveConnector.1.3; OfficeLivePatch.0.0; AskTbATU3/5.15.29.67612; BRI/2)"
    187.40.241.48 - - [05/Jun/2014:07:35:08 -0400] "GET /updatedll.jpg HTTP/1.1" 301 465 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; GTB7.5; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)"
    186.56.134.132 - - [05/Jun/2014:07:35:10 -0400] "GET /updatedll.jpg HTTP/1.0" 301 428 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322)"
    71.223.252.14 - - [05/Jun/2014:07:35:13 -0400] "GET /updatedll.jpg HTTP/1.1" 301 465 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; BTRS31756; GTB7.5; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; .NET4.0C; .NET4.0E; InfoPath.2)"
    85.245.229.167 - - [05/Jun/2014:07:35:14 -0400] "GET /updatedll.jpg HTTP/1.1" 301 465 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; Trident/7.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; MAAU; .NET4.0C; BRI/2; .NET4.0E; MAAU)"
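
    To put numbers on how hard those five addresses are hammering the site before deciding whether to block them at the firewall, a quick log tally helps. The Python sketch below is illustrative only; it assumes the standard combined log format shown above and a Debian-style log path, and simply counts /updatedll.jpg requests per client IP.

        import re
        from collections import Counter

        ACCESS_LOG = "/var/log/apache2/access.log"   # assumed path; adjust as needed

        # Combined log format: IP - - [timestamp] "GET /updatedll.jpg ..." status bytes ...
        pattern = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "GET /updatedll\.jpg')

        hits = Counter()
        with open(ACCESS_LOG, errors="replace") as log:
            for line in log:
                match = pattern.match(line)
                if match:
                    hits[match.group(1)] += 1

        for ip, count in hits.most_common():
            print(f"{count:8d}  {ip}")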

    Read the article

  • How to use onSensorChanged sensor data in combination with OpenGL

    - by Sponge
    I have written a TestSuite to find out how to calculate the rotation angles from the data you get in SensorEventListener.onSensorChanged(). I really hope you can complete my solution to help people who will have the same problems like me. Here is the code, i think you will understand it after reading it. Feel free to change it, the main idea was to implement several methods to send the orientation angles to the opengl view or any other target which would need it. method 1 to 4 are working, they are directly sending the rotationMatrix to the OpenGl view. all other methods are not working or buggy and i hope someone knows to get them working. i think the best method would be method 5 if it would work, because it would be the easiest to understand but i'm not sure how efficient it is. the complete code isn't optimized so i recommend to not use it as it is in your project. here it is: import java.nio.ByteBuffer; import java.nio.ByteOrder; import java.nio.FloatBuffer; import javax.microedition.khronos.egl.EGL10; import javax.microedition.khronos.egl.EGLConfig; import javax.microedition.khronos.opengles.GL10; import static javax.microedition.khronos.opengles.GL10.*; import android.app.Activity; import android.content.Context; import android.content.pm.ActivityInfo; import android.hardware.Sensor; import android.hardware.SensorEvent; import android.hardware.SensorEventListener; import android.hardware.SensorManager; import android.opengl.GLSurfaceView; import android.opengl.GLSurfaceView.Renderer; import android.os.Bundle; import android.util.Log; import android.view.WindowManager; /** * This class provides a basic demonstration of how to use the * {@link android.hardware.SensorManager SensorManager} API to draw a 3D * compass. */ public class SensorToOpenGlTests extends Activity implements Renderer, SensorEventListener { private static final boolean TRY_TRANSPOSED_VERSION = false; /* * MODUS overview: * * 1 - unbufferd data directly transfaired from the rotation matrix to the * modelview matrix * * 2 - buffered version of 1 where both acceleration and magnetometer are * buffered * * 3 - buffered version of 1 where only magnetometer is buffered * * 4 - buffered version of 1 where only acceleration is buffered * * 5 - uses the orientation sensor and sets the angles how to rotate the * camera with glrotate() * * 6 - uses the rotation matrix to calculate the angles * * 7 to 12 - every possibility how the rotationMatrix could be constructed * in SensorManager.getRotationMatrix (see * http://www.songho.ca/opengl/gl_anglestoaxes.html#anglestoaxes for all * possibilities) */ private static int MODUS = 2; private GLSurfaceView openglView; private FloatBuffer vertexBuffer; private ByteBuffer indexBuffer; private FloatBuffer colorBuffer; private SensorManager mSensorManager; private float[] rotationMatrix = new float[16]; private float[] accelGData = new float[3]; private float[] bufferedAccelGData = new float[3]; private float[] magnetData = new float[3]; private float[] bufferedMagnetData = new float[3]; private float[] orientationData = new float[3]; // private float[] mI = new float[16]; private float[] resultingAngles = new float[3]; private int mCount; final static float rad2deg = (float) (180.0f / Math.PI); private boolean mirrorOnBlueAxis = false; private boolean landscape; public SensorToOpenGlTests() { } /** Called with the activity is first created. 
*/ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); mSensorManager = (SensorManager) getSystemService(Context.SENSOR_SERVICE); openglView = new GLSurfaceView(this); openglView.setRenderer(this); setContentView(openglView); } @Override protected void onResume() { // Ideally a game should implement onResume() and onPause() // to take appropriate action when the activity looses focus super.onResume(); openglView.onResume(); if (((WindowManager) getSystemService(WINDOW_SERVICE)) .getDefaultDisplay().getOrientation() == 1) { landscape = true; } else { landscape = false; } mSensorManager.registerListener(this, mSensorManager .getDefaultSensor(Sensor.TYPE_ACCELEROMETER), SensorManager.SENSOR_DELAY_GAME); mSensorManager.registerListener(this, mSensorManager .getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD), SensorManager.SENSOR_DELAY_GAME); mSensorManager.registerListener(this, mSensorManager .getDefaultSensor(Sensor.TYPE_ORIENTATION), SensorManager.SENSOR_DELAY_GAME); } @Override protected void onPause() { // Ideally a game should implement onResume() and onPause() // to take appropriate action when the activity looses focus super.onPause(); openglView.onPause(); mSensorManager.unregisterListener(this); } public int[] getConfigSpec() { // We want a depth buffer, don't care about the // details of the color buffer. int[] configSpec = { EGL10.EGL_DEPTH_SIZE, 16, EGL10.EGL_NONE }; return configSpec; } public void onDrawFrame(GL10 gl) { // clear screen and color buffer: gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT); // set target matrix to modelview matrix: gl.glMatrixMode(GL10.GL_MODELVIEW); // init modelview matrix: gl.glLoadIdentity(); // move camera away a little bit: if ((MODUS == 1) || (MODUS == 2) || (MODUS == 3) || (MODUS == 4)) { if (landscape) { // in landscape mode first remap the rotationMatrix before using // it with glMultMatrixf: float[] result = new float[16]; SensorManager.remapCoordinateSystem(rotationMatrix, SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X, result); gl.glMultMatrixf(result, 0); } else { gl.glMultMatrixf(rotationMatrix, 0); } } else { //in all other modes do the rotation by hand: gl.glRotatef(resultingAngles[1], 1, 0, 0); gl.glRotatef(resultingAngles[2], 0, 1, 0); gl.glRotatef(resultingAngles[0], 0, 0, 1); if (mirrorOnBlueAxis) { //this is needed for mode 6 to work gl.glScalef(1, 1, -1); } } //move the axis to simulate augmented behaviour: gl.glTranslatef(0, 2, 0); // draw the 3 axis on the screen: gl.glVertexPointer(3, GL_FLOAT, 0, vertexBuffer); gl.glColorPointer(4, GL_FLOAT, 0, colorBuffer); gl.glDrawElements(GL_LINES, 6, GL_UNSIGNED_BYTE, indexBuffer); } public void onSurfaceChanged(GL10 gl, int width, int height) { gl.glViewport(0, 0, width, height); float r = (float) width / height; gl.glMatrixMode(GL10.GL_PROJECTION); gl.glLoadIdentity(); gl.glFrustumf(-r, r, -1, 1, 1, 10); } public void onSurfaceCreated(GL10 gl, EGLConfig config) { gl.glDisable(GL10.GL_DITHER); gl.glClearColor(1, 1, 1, 1); gl.glEnable(GL10.GL_CULL_FACE); gl.glShadeModel(GL10.GL_SMOOTH); gl.glEnable(GL10.GL_DEPTH_TEST); gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); gl.glEnableClientState(GL10.GL_COLOR_ARRAY); // load the 3 axis and there colors: float vertices[] = { 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1 }; float colors[] = { 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1 }; byte indices[] = { 0, 1, 0, 2, 0, 3 }; ByteBuffer vbb; vbb = ByteBuffer.allocateDirect(vertices.length * 4); vbb.order(ByteOrder.nativeOrder()); vertexBuffer = 
vbb.asFloatBuffer(); vertexBuffer.put(vertices); vertexBuffer.position(0); vbb = ByteBuffer.allocateDirect(colors.length * 4); vbb.order(ByteOrder.nativeOrder()); colorBuffer = vbb.asFloatBuffer(); colorBuffer.put(colors); colorBuffer.position(0); indexBuffer = ByteBuffer.allocateDirect(indices.length); indexBuffer.put(indices); indexBuffer.position(0); } public void onAccuracyChanged(Sensor sensor, int accuracy) { } public void onSensorChanged(SensorEvent event) { // load the new values: loadNewSensorData(event); if (MODUS == 1) { SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData); } if (MODUS == 2) { rootMeanSquareBuffer(bufferedAccelGData, accelGData); rootMeanSquareBuffer(bufferedMagnetData, magnetData); SensorManager.getRotationMatrix(rotationMatrix, null, bufferedAccelGData, bufferedMagnetData); } if (MODUS == 3) { rootMeanSquareBuffer(bufferedMagnetData, magnetData); SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, bufferedMagnetData); } if (MODUS == 4) { rootMeanSquareBuffer(bufferedAccelGData, accelGData); SensorManager.getRotationMatrix(rotationMatrix, null, bufferedAccelGData, magnetData); } if (MODUS == 5) { // this mode uses the sensor data recieved from the orientation // sensor resultingAngles = orientationData.clone(); if ((-90 > resultingAngles[1]) || (resultingAngles[1] > 90)) { resultingAngles[1] = orientationData[0]; resultingAngles[2] = orientationData[1]; resultingAngles[0] = orientationData[2]; } } if (MODUS == 6) { SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData); final float[] anglesInRadians = new float[3]; SensorManager.getOrientation(rotationMatrix, anglesInRadians); if ((-90 < anglesInRadians[2] * rad2deg) && (anglesInRadians[2] * rad2deg < 90)) { // device camera is looking on the floor // this hemisphere is working fine mirrorOnBlueAxis = false; resultingAngles[0] = anglesInRadians[0] * rad2deg; resultingAngles[1] = anglesInRadians[1] * rad2deg; resultingAngles[2] = anglesInRadians[2] * -rad2deg; } else { mirrorOnBlueAxis = true; // device camera is looking in the sky // this hemisphere is mirrored at the blue axis resultingAngles[0] = (anglesInRadians[0] * rad2deg); resultingAngles[1] = (anglesInRadians[1] * rad2deg); resultingAngles[2] = (anglesInRadians[2] * rad2deg); } } if (MODUS == 7) { SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData); rotationMatrix = transpose(rotationMatrix); /* * this assumes that the rotation matrices are multiplied in x y z * order Rx*Ry*Rz */ resultingAngles[2] = (float) (Math.asin(rotationMatrix[2])); final float cosB = (float) Math.cos(resultingAngles[2]); resultingAngles[2] = resultingAngles[2] * rad2deg; resultingAngles[0] = -(float) (Math.acos(rotationMatrix[0] / cosB)) * rad2deg; resultingAngles[1] = (float) (Math.acos(rotationMatrix[10] / cosB)) * rad2deg; } if (MODUS == 8) { SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData); rotationMatrix = transpose(rotationMatrix); /* * this assumes that the rotation matrices are multiplied in z y x */ resultingAngles[2] = (float) (Math.asin(-rotationMatrix[8])); final float cosB = (float) Math.cos(resultingAngles[2]); resultingAngles[2] = resultingAngles[2] * rad2deg; resultingAngles[1] = (float) (Math.acos(rotationMatrix[9] / cosB)) * rad2deg; resultingAngles[0] = (float) (Math.asin(rotationMatrix[4] / cosB)) * rad2deg; } if (MODUS == 9) { SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData); rotationMatrix = 
transpose(rotationMatrix); /* * this assumes that the rotation matrices are multiplied in z x y * * note z axis looks good at this one */ resultingAngles[1] = (float) (Math.asin(rotationMatrix[9])); final float minusCosA = -(float) Math.cos(resultingAngles[1]); resultingAngles[1] = resultingAngles[1] * rad2deg; resultingAngles[2] = (float) (Math.asin(rotationMatrix[8] / minusCosA)) * rad2deg; resultingAngles[0] = (float) (Math.asin(rotationMatrix[1] / minusCosA)) * rad2deg; } if (MODUS == 10) { SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData); rotationMatrix = transpose(rotationMatrix); /* * this assumes that the rotation matrices are multiplied in y x z */ resultingAngles[1] = (float) (Math.asin(-rotationMatrix[6])); final float cosA = (float) Math.cos(resultingAngles[1]); resultingAngles[1] = resultingAngles[1] * rad2deg; resultingAngles[2] = (float) (Math.asin(rotationMatrix[2] / cosA)) * rad2deg; resultingAngles[0] = (float) (Math.acos(rotationMatrix[5] / cosA)) * rad2deg; } if (MODUS == 11) { SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData); rotationMatrix = transpose(rotationMatrix); /* * this assumes that the rotation matrices are multiplied in y z x */ resultingAngles[0] = (float) (Math.asin(rotationMatrix[4])); final float cosC = (float) Math.cos(resultingAngles[0]); resultingAngles[0] = resultingAngles[0] * rad2deg; resultingAngles[2] = (float) (Math.acos(rotationMatrix[0] / cosC)) * rad2deg; resultingAngles[1] = (float) (Math.acos(rotationMatrix[5] / cosC)) * rad2deg; } if (MODUS == 12) { SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData); rotationMatrix = transpose(rotationMatrix); /* * this assumes that the rotation matrices are multiplied in x z y */ resultingAngles[0] = (float) (Math.asin(-rotationMatrix[1])); final float cosC = (float) Math.cos(resultingAngles[0]); resultingAngles[0] = resultingAngles[0] * rad2deg; resultingAngles[2] = (float) (Math.acos(rotationMatrix[0] / cosC)) * rad2deg; resultingAngles[1] = (float) (Math.acos(rotationMatrix[5] / cosC)) * rad2deg; } logOutput(); } /** * transposes the matrix because it was transposted (inverted, but here its * the same, because its a rotation matrix) to be used for opengl * * @param source * @return */ private float[] transpose(float[] source) { final float[] result = source.clone(); if (TRY_TRANSPOSED_VERSION) { result[1] = source[4]; result[2] = source[8]; result[4] = source[1]; result[6] = source[9]; result[8] = source[2]; result[9] = source[6]; } // the other values in the matrix are not relevant for rotations return result; } private void rootMeanSquareBuffer(float[] target, float[] values) { final float amplification = 200.0f; float buffer = 20.0f; target[0] += amplification; target[1] += amplification; target[2] += amplification; values[0] += amplification; values[1] += amplification; values[2] += amplification; target[0] = (float) (Math .sqrt((target[0] * target[0] * buffer + values[0] * values[0]) / (1 + buffer))); target[1] = (float) (Math .sqrt((target[1] * target[1] * buffer + values[1] * values[1]) / (1 + buffer))); target[2] = (float) (Math .sqrt((target[2] * target[2] * buffer + values[2] * values[2]) / (1 + buffer))); target[0] -= amplification; target[1] -= amplification; target[2] -= amplification; values[0] -= amplification; values[1] -= amplification; values[2] -= amplification; } private void loadNewSensorData(SensorEvent event) { final int type = event.sensor.getType(); if (type == Sensor.TYPE_ACCELEROMETER) { 
accelGData = event.values.clone(); } if (type == Sensor.TYPE_MAGNETIC_FIELD) { magnetData = event.values.clone(); } if (type == Sensor.TYPE_ORIENTATION) { orientationData = event.values.clone(); } } private void logOutput() { if (mCount++ > 30) { mCount = 0; Log.d("Compass", "yaw0: " + (int) (resultingAngles[0]) + " pitch1: " + (int) (resultingAngles[1]) + " roll2: " + (int) (resultingAngles[2])); } } }

    Read the article

  • UIImagePickerController, UIImage, Memory and More!

    - by Itay
    I've noticed that there are many questions about how to handle UIImage objects, especially in conjunction with UIImagePickerController and then displaying it in a view (usually a UIImageView). Here is a collection of common questions and their answers. Feel free to edit and add your own. I obviously learnt all this information from somewhere too. Various forum posts, StackOverflow answers and my own experimenting brought me to all these solutions. Credit goes to those who posted some sample code that I've since used and modified. I don't remember who you all are - but hats off to you! How Do I Select An Image From the User's Images or From the Camera? You use UIImagePickerController. The documentation for the class gives a decent overview of how one would use it, and can be found here. Basically, you create an instance of the class, which is a modal view controller, display it, and set yourself (or some class) to be the delegate. Then you'll get notified when a user selects some form of media (movie or image in 3.0 on the 3GS), and you can do whatever you want. My Delegate Was Called - How Do I Get The Media? The delegate method signature is the following: - (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info; You should put a breakpoint in the debugger to see what's in the dictionary, but you use that to extract the media. For example: UIImage* image = [info objectForKey:UIImagePickerControllerOriginalImage]; There are other keys that work as well, all in the documentation. OK, I Got The Image, But It Doesn't Have Any Geolocation Data. What gives? Unfortunately, Apple decided that we're not worthy of this information. When they load the data into the UIImage, they strip it of all the EXIF/Geolocation data. Can I Get To The Original File Representing This Image on the Disk? Nope. For security purposes, you only get the UIImage. How Can I Look At The Underlying Pixels of the UIImage? Since the UIImage is immutable, you can't look at the direct pixels. However, you can make a copy. The code to this looks something like this: UIImage* image = ...; // An image NSData* pixelData = (NSData*) CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage)); unsigned char* pixelBytes = (unsigned char *)[pixelData bytes]; // Take away the red pixel, assuming 32-bit RGBA for(int i = 0; i < [pixelData length]; i += 4) { pixelBytes[i] = 0; // red pixelBytes[i+1] = pixelBytes[i+1]; // green pixelBytes[i+2] = pixelBytes[i+2]; // blue pixelBytes[i+3] = pixelBytes[i+3]; // alpha } However, note that CGDataProviderCopyData provides you with an "immutable" reference to the data - meaning you can't change it (and you may get a BAD_ACCESS error if you do). Look at the next question if you want to see how you can modify the pixels. How Do I Modify The Pixels of the UIImage? The UIImage is immutable, meaning you can't change it. Apple posted a great article on how to get a copy of the pixels and modify them, and rather than copy and paste it here, you should just go read the article. Once you have the bitmap context as they mention in the article, you can do something similar to this to get a new UIImage with the modified pixels: CGImageRef ref = CGBitmapContextCreateImage(bitmap); UIImage* newImage = [UIImage imageWithCGImage:ref]; Do remember to release your references though, otherwise you're going to be leaking quite a bit of memory. After I Select 3 Images From The Camera, I Run Out Of Memory. Help! 
You have to remember that even though on disk these images take up only a few hundred kilobytes at most, that's because they're compressed as a PNG or JPG. When they are loaded into the UIImage, they become uncompressed. A quick over-the-envelope calculation would be: width x height x 4 = bytes in memory That's assuming 32-bit pixels. If you have 16-bit pixels (some JPGs are stored as RGBA-5551), then you'd replace the 4 with a 2. Now, images taken with the camera are 1600 x 1200 pixels, so let's do the math: 1600 x 1200 x 4 = 7,680,000 bytes = ~8 MB 8 MB is a lot, especially when you have a limit of around 24 MB for your application. That's why you run out of memory. OK, I Understand Why I Have No Memory. What Do I Do? There is never any reason to display images at their full resolution. The iPhone has a screen of 480 x 320 pixels, so you're just wasting space. If you find yourself in this situation, ask yourself the following question: Do I need the full resolution image? If the answer is yes, then you should save it to disk for later use. If the answer is no, then read the next part. Once you've decided what to do with the full-resolution image, then you need to create a smaller image to use for displaying. Many times you might even want several sizes for your image: a thumbnail, a full-size one for displaying, and the original full-resolution image. OK, I'm Hooked. How Do I Resize the Image? Unfortunately, there is no defined way how to resize an image. Also, it's important to note that when you resize it, you'll get a new image - you're not modifying the old one. There are a couple of methods to do the resizing. I'll present them both here, and explain the pros and cons of each. Method 1: Using UIKit + (UIImage*)imageWithImage:(UIImage*)image scaledToSize:(CGSize)newSize; { // Create a graphics image context UIGraphicsBeginImageContext(newSize); // Tell the old image to draw in this new context, with the desired // new size [image drawInRect:CGRectMake(0,0,newSize.width,newSize.height)]; // Get the new image from the context UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext(); // End the context UIGraphicsEndImageContext(); // Return the new image. return newImage; } This method is very simple, and works great. It will also deal with the UIImageOrientation for you, meaning that you don't have to care whether the camera was sideways when the picture was taken. However, this method is not thread safe, and since thumbnailing is a relatively expensive operation (approximately ~2.5s on a 3G for a 1600 x 1200 pixel image), this is very much an operation you may want to do in the background, on a separate thread. 
Method 2: Using CoreGraphics + (UIImage*)imageWithImage:(UIImage*)sourceImage scaledToSize:(CGSize)newSize; { CGFloat targetWidth = targetSize.width; CGFloat targetHeight = targetSize.height; CGImageRef imageRef = [sourceImage CGImage]; CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef); CGColorSpaceRef colorSpaceInfo = CGImageGetColorSpace(imageRef); if (bitmapInfo == kCGImageAlphaNone) { bitmapInfo = kCGImageAlphaNoneSkipLast; } CGContextRef bitmap; if (sourceImage.imageOrientation == UIImageOrientationUp || sourceImage.imageOrientation == UIImageOrientationDown) { bitmap = CGBitmapContextCreate(NULL, targetWidth, targetHeight, CGImageGetBitsPerComponent(imageRef), CGImageGetBytesPerRow(imageRef), colorSpaceInfo, bitmapInfo); } else { bitmap = CGBitmapContextCreate(NULL, targetHeight, targetWidth, CGImageGetBitsPerComponent(imageRef), CGImageGetBytesPerRow(imageRef), colorSpaceInfo, bitmapInfo); } if (sourceImage.imageOrientation == UIImageOrientationLeft) { CGContextRotateCTM (bitmap, radians(90)); CGContextTranslateCTM (bitmap, 0, -targetHeight); } else if (sourceImage.imageOrientation == UIImageOrientationRight) { CGContextRotateCTM (bitmap, radians(-90)); CGContextTranslateCTM (bitmap, -targetWidth, 0); } else if (sourceImage.imageOrientation == UIImageOrientationUp) { // NOTHING } else if (sourceImage.imageOrientation == UIImageOrientationDown) { CGContextTranslateCTM (bitmap, targetWidth, targetHeight); CGContextRotateCTM (bitmap, radians(-180.)); } CGContextDrawImage(bitmap, CGRectMake(0, 0, targetWidth, targetHeight), imageRef); CGImageRef ref = CGBitmapContextCreateImage(bitmap); UIImage* newImage = [UIImage imageWithCGImage:ref]; CGContextRelease(bitmap); CGImageRelease(ref); return newImage; } The benefit of this method is that it is thread-safe, plus it takes care of all the small things (using correct color space and bitmap info, dealing with image orientation) that the UIKit version does. How Do I Resize and Maintain Aspect Ratio (like the AspectFill option)? 
It is very similar to the method above, and it looks like this: + (UIImage*)imageWithImage:(UIImage*)sourceImage scaledToSizeWithSameAspectRatio:(CGSize)targetSize; { CGSize imageSize = sourceImage.size; CGFloat width = imageSize.width; CGFloat height = imageSize.height; CGFloat targetWidth = targetSize.width; CGFloat targetHeight = targetSize.height; CGFloat scaleFactor = 0.0; CGFloat scaledWidth = targetWidth; CGFloat scaledHeight = targetHeight; CGPoint thumbnailPoint = CGPointMake(0.0,0.0); if (CGSizeEqualToSize(imageSize, targetSize) == NO) { CGFloat widthFactor = targetWidth / width; CGFloat heightFactor = targetHeight / height; if (widthFactor > heightFactor) { scaleFactor = widthFactor; // scale to fit height } else { scaleFactor = heightFactor; // scale to fit width } scaledWidth = width * scaleFactor; scaledHeight = height * scaleFactor; // center the image if (widthFactor > heightFactor) { thumbnailPoint.y = (targetHeight - scaledHeight) * 0.5; } else if (widthFactor < heightFactor) { thumbnailPoint.x = (targetWidth - scaledWidth) * 0.5; } } CGImageRef imageRef = [sourceImage CGImage]; CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef); CGColorSpaceRef colorSpaceInfo = CGImageGetColorSpace(imageRef); if (bitmapInfo == kCGImageAlphaNone) { bitmapInfo = kCGImageAlphaNoneSkipLast; } CGContextRef bitmap; if (sourceImage.imageOrientation == UIImageOrientationUp || sourceImage.imageOrientation == UIImageOrientationDown) { bitmap = CGBitmapContextCreate(NULL, targetWidth, targetHeight, CGImageGetBitsPerComponent(imageRef), CGImageGetBytesPerRow(imageRef), colorSpaceInfo, bitmapInfo); } else { bitmap = CGBitmapContextCreate(NULL, targetHeight, targetWidth, CGImageGetBitsPerComponent(imageRef), CGImageGetBytesPerRow(imageRef), colorSpaceInfo, bitmapInfo); } // In the right or left cases, we need to switch scaledWidth and scaledHeight, // and also the thumbnail point if (sourceImage.imageOrientation == UIImageOrientationLeft) { thumbnailPoint = CGPointMake(thumbnailPoint.y, thumbnailPoint.x); CGFloat oldScaledWidth = scaledWidth; scaledWidth = scaledHeight; scaledHeight = oldScaledWidth; CGContextRotateCTM (bitmap, radians(90)); CGContextTranslateCTM (bitmap, 0, -targetHeight); } else if (sourceImage.imageOrientation == UIImageOrientationRight) { thumbnailPoint = CGPointMake(thumbnailPoint.y, thumbnailPoint.x); CGFloat oldScaledWidth = scaledWidth; scaledWidth = scaledHeight; scaledHeight = oldScaledWidth; CGContextRotateCTM (bitmap, radians(-90)); CGContextTranslateCTM (bitmap, -targetWidth, 0); } else if (sourceImage.imageOrientation == UIImageOrientationUp) { // NOTHING } else if (sourceImage.imageOrientation == UIImageOrientationDown) { CGContextTranslateCTM (bitmap, targetWidth, targetHeight); CGContextRotateCTM (bitmap, radians(-180.)); } CGContextDrawImage(bitmap, CGRectMake(thumbnailPoint.x, thumbnailPoint.y, scaledWidth, scaledHeight), imageRef); CGImageRef ref = CGBitmapContextCreateImage(bitmap); UIImage* newImage = [UIImage imageWithCGImage:ref]; CGContextRelease(bitmap); CGImageRelease(ref); return newImage; } The method we employ here is to create a bitmap with the desired size, but draw an image that is actually larger, thus maintaining the aspect ratio. So We've Got Our Scaled Images - How Do I Save Them To Disk? This is pretty simple. Remember that we want to save a compressed version to disk, and not the uncompressed pixels. 
Apple provides two functions that help us with this (documentation is here): NSData* UIImagePNGRepresentation(UIImage *image); NSData* UIImageJPEGRepresentation (UIImage *image, CGFloat compressionQuality); And if you want to use them, you'd do something like: UIImage* myThumbnail = ...; // Get some image NSData* imageData = UIImagePNGRepresentation(myThumbnail); Now we're ready to save it to disk, which is the final step (say into the documents directory): // Give a name to the file NSString* imageName = @"MyImage.png"; // Now, we have to find the documents directory so we can save it // Note that you might want to save it elsewhere, like the cache directory, // or something similar. NSArray* paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES); NSString* documentsDirectory = [paths objectAtIndex:0]; // Now we get the full path to the file NSString* fullPathToFile = [documentsDirectory stringByAppendingPathComponent:imageName]; // and then we write it out [imageData writeToFile:fullPathToFile atomically:NO]; You would repeat this for every version of the image you have. How Do I Load These Images Back Into Memory? Just look at the various UIImage initialization methods, such as +imageWithContentsOfFile: in the Apple documentation.

    Read the article

  • Raycasting "fisheye effect" question

    - by mattboy
    Continuing my exploration of raycasting, I am very confused about how the correction of the fisheye effect works. Looking at the screenshot below from the tutorial at permadi.com, the way I understand it, the cause of the fisheye effect is that the cast rays measure distances from the player, rather than distances perpendicular to the screen (or camera plane), which is what really needs to be displayed. The distance perpendicular to the screen, then, in my world, should simply be the difference of the Y coordinates (Py - Dy), assuming that the player is facing straight upwards. Continuing through the tutorial, this is exactly how it seems to be according to the screenshot below. From my point of view, the "distorted distance" below is the same as the distance PD calculated above, and what's labelled the "correct distance" below should be the same as Py - Dy. Yet this clearly isn't the case according to the tutorial. My question is, WHY is this not the same? How could it not be? What am I understanding and visualizing wrong here?
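
    For reference, the correction the permadi tutorial applies is to project each ray's Euclidean distance onto the viewing direction, i.e. multiply it by the cosine of the angle between the ray and the direction the player is facing. The Python sketch below is just an illustration of that formula; the variable names are mine, not the tutorial's.

        import math

        def corrected_distance(ray_distance, ray_angle, player_angle):
            # ray_distance is the Euclidean distance PD along the ray.
            # Scaling it by the cosine of the angle between the ray and the
            # viewing direction gives the distance perpendicular to the
            # camera plane, which is what removes the fisheye distortion.
            beta = ray_angle - player_angle
            return ray_distance * math.cos(beta)

        # A ray 30 degrees off-centre that travelled 10 units hits a wall whose
        # perpendicular distance is 10 * cos(30 deg), roughly 8.66 units.
        print(corrected_distance(10.0, math.radians(30), math.radians(0)))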

    Read the article

  • Older SAS1 hardware Vs. newer SAS2 hardware

    - by user12620172
    I got a question today from someone asking about the older SAS1 hardware from over a year ago that we had on the older 7x10 series. They didn't leave an email so I couldn't respond directly, but I said this blog would be blunt, frank, and open so I have no problem addressing it publicly. A quick history lesson here: When Sun first put out the 7x10 family hardware, the 7410 and 7310 used a SAS1 backend connection to a JBOD that had SATA drives in it. This JBOD was not manufactured by Sun nor did Sun own the IP for it. Now, when Oracle took over, they had a problem with that, and I really can’t blame them. The decision was made to cut off that JBOD and it’s manufacturer completely and use our own where Oracle controlled both the IP and the manufacturing. So in the summer of 2010, the cut was made, and the 7410 and 7310 had a hardware refresh and now had a SAS2 backend going to a SAS2 JBOD with SAS2 drives instead of SATA. This new hardware had two big advantages. First, there was a nice performance increase, mostly due to the faster backend. Even better, the SAS2 interface on the drives allowed for a MUCH faster failover between cluster heads, as the SATA drives were the bottleneck on the older hardware. In September of 2010 there was a major refresh of the rest of the 7000 hardware, the controllers and the other family members, and that’s where we got today’s current line-up of the 7x20 series. So the 7x20 has always used the new trays, and the 7410 and 7310 have used the new SAS2 trays since last July of 2010. Now for the bad news. People who have the 7410 and 7310 from BEFORE the July 2010 cutoff have the models with SAS1 HBAs in them to connect to the older SAS1 trays. Remember, that manufacturer cut all ties with us and stopped making the JBOD, so there’s just no way to get more of them, as they don’t exist. There are some options, however. Oracle support does support taking out the SAS1 HBAs in the old 7410 and 7310 and put in newer SAS2 HBAs which can talk to the new trays. Hey, I didn’t say it was a great option, I just said it’s an option. I fully realize that you would then have a SAS1 JBOD full of SATA drives that you could no longer connect. I do know a client that did this, and took the SAS1 JBOD and connected it to another server and formatted the drives and is using it as a plain, non-7000 JBOD. This is not supported by Oracle support. The other option is to just keep it as-is, as it works just fine, but you just can’t expand it. Then you can get a newer 7x20 series, and use the built-in ZFSSA replication feature to move the data over. Now you can use the newer one for your production data and use the older one for DR, snaps and clones.

    Read the article

  • Set Up Reverse DNS with cPanel and WHM?

    - by m3d
    I needed to set up reverse DNS via cPanel. I followed the steps in this tutorial, but it didn't work: http://docs.cpanel.net/twiki/bin/view/11_30/WHMDocs/RdnsForBind. I use my own name servers registered with GoDaddy, but I am with a VPS hosting company. I used a new serial number and followed the tutorial exactly, however it doesn't seem to be working. When I check this from Windows with nslookup {ip-address}, I still get my hosting company's name for the reverse lookup.
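
    A quick way to see what the outside world gets back for the address, equivalent to the nslookup check mentioned above, is a PTR lookup through the system resolver. The Python sketch below is only illustrative; the IP is a placeholder. Note that the PTR zone for an IP block normally has to be delegated to your name servers by the VPS provider, which is usually why the provider's generic hostname keeps showing up even after you add your own reverse zone in WHM.

        import socket

        ip = "203.0.113.10"   # placeholder; substitute your VPS address

        try:
            hostname, aliases, addresses = socket.gethostbyaddr(ip)
            print(f"PTR for {ip}: {hostname}")
        except socket.herror as error:
            print(f"No PTR record found for {ip}: {error}")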

    Read the article

  • How do you share your craft with non programmers?

    - by EpsilonVector
    Sometimes I feel like a musician who can't play live shows. Programming is a pretty cool skill and a very broad world, but a lot of it happens "off camera" - in your head, in your office, away from spectators. You can of course talk about programming with other programmers, there is pair programming, and you do get to create something that you can show to people, but when it comes to explaining to non programmers what it is that you do, or what your day at work was like, it's sort of tricky. How do you get the non programmers in your life to understand what it is that you do? NOTE: this is not a repeat of "Getting non-programmers to understand the development process", because that question was about managing client expectations.

    Read the article

  • How to eliminate screen jitter in a Flash game?

    - by Huang F. Lei
    We made a Flash game with a big screen size (1000x600), and the terrain graphics jitter while the screen scrolls continuously (that is, while the camera moves with the player). What's the root cause? Or if you know how to eliminate this problem, please tell me. Any help is appreciated. UPDATE: The map is much bigger than the screen, e.g. 6000x1200, and it has more than one layer, generally 3 layers. The terrain is composed of tiles, and the FPS is 24. If the FPS is set to 60, things look better, but either way it should work well at 24 FPS. I'm not sure if it is an inherent problem of the Flash player, because sometimes a terrain object (e.g. a house) looks a little bit distorted.

    Read the article

  • Ask How-To Geek: Tiling Windows, iOS Remote Desktop, and Getting a Handle on Windows 7 Libraries

    - by Jason Fitzpatrick
    This week we’re taking a look at how to tile application windows in Windows 7, remote controlling your desktop from iOS devices, and understanding exactly what Windows 7 libraries are. Once a week we dip into our reader mailbag and help readers solve their problems, sharing the useful solutions with you in the process. Read on to see the fixes for this week’s reader dilemmas.

    Read the article

  • Stream Music and Video Over the Internet with Windows Media Player 12

    - by DigitalGeekery
    A new feature in Windows Media Player 12, which is included with Windows 7, is being able to stream media over the web to other Windows 7 computers.  Today we will take a look at how to set it up and what you need to begin. Note: You will need to perform this process on each computer that you want to use. What You’ll Need Two computers running Windows 7 Home Premium, Professional, or Ultimate. The host, or home computer that you will be streaming the media from, cannot be on a public network or part of domain. Windows Live ID UPnP or Port Forwarding enabled on your home router Media files added to your Windows Media Player library Windows Live ID Sign up online for a Windows Live ID if you do not already have one. See the link below for a link to Windows Live.   Configuring the Windows 7 Computers Open Windows Media Player and go to the library section. Click on Stream and then “Allow Internet access to home media.”   The Internet Home Media Access pop up window will prompt you to link your Windows Live ID to a user account. Click “Link an online ID.” If you haven’t already installed the Windows Live ID Sign-In Assistant, you will be taken to Microsoft’s website and prompted to download it. Once you have completed the Windows Live download assistant install, you will see Windows Live ID online provider appear in the “Link Online IDs” window. Click on “Link Online ID.” Next, you’ll be prompted for a Windows Live ID and password. Enter your Windows Live ID and password and click “Sign In.” A pop up window will notify you that you have successfully allowed Internet access to home media. Now, you will have to repeat the exact same configuration on the 2nd Windows 7 computer. Once you have completed the same configuration on your 2nd computer, you might also need to configure your home router for port forwarding. If your router supports UPnP, you may not need to manually forward any ports on your router. So, this would be a good time to test your connection. Go to a nearby hotspot, or perhaps a neighbor’s house, and test to see if you can stream your media. If not, you’ll need to manually forward the ports. You can always choose to forward the ports anyway, just in case. Note: We tested on a Linksys WRT54GL router, which supports UPnP, and found we still needed to manually forward the ports. Finding the ports to forward on the router Open Windows Media Player and make sure you are in Library view. Click on “Stream” on the top menu, and select “Allow Internet access to home media.”   On the “Internet Home Media Access” window, click on “Diagnose connections.” The “Internet Streaming Diagnostic Tool” will pop up. Click on “Port forwarding information” near the bottom.   On the “Port Forwarding Information” window you will find both the Internal and External Port numbers you will need to forward on your router. The Internal port number should always be 10245. The external number will be different depending on your computer. Microsoft also recommends forwarding port 443. Configuring the Router Next, you’ll need to configure Port Forwarding on your home router. We will show you the steps for a Linksys WRT54GL router, however, the steps for port forwarding will vary from router to router. On the Linksys configuration page, click on the Administration Tab along the top, click the “Applications & Gaming Tab, and then the “Port Range Forward” tab below it. Under “Application,” type in a name. It can be any name you choose. In both the “Start” and “End” boxes, type the port number. 
    Enter the IP address of your home computer in the IP address column. Click the check box under “Enable.” Do this for both the internal and external port numbers and for port 443. When finished, click the “Save Settings” button. Note: It’s highly recommended that you configure your home computer with a static IP address. When you’re ready to play your media over the Internet, open up Windows Media Player and look for your host computer and username listed under “Other Libraries.” Click on it to expand the list and see your media libraries. Choose a library and a file to play. Now you can enjoy your streaming media over the Internet. Conclusion: We found media streaming over the Internet to work fairly well. However, we did see a loss of quality with streaming video. Also, Recorded TV .wtv and dvr-ms files did not play at all. Check out our previous article to see how to share and stream media between Windows 7 computers on your home network.
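
    If you have access to any machine outside your home network, a quick probe from there can confirm the forwarded ports are reachable before you test streaming. The Python sketch below is a generic reachability check, not part of the original guide; the host name and the external port number are placeholders you would replace with your own public address and the external port reported by the diagnostic tool.

        import socket

        HOST = "your-public-ip-or-dyndns-name"   # placeholder
        PORTS = [443, 12345]                     # 443 plus your external port (12345 is a placeholder)

        for port in PORTS:
            try:
                with socket.create_connection((HOST, port), timeout=5):
                    print(f"{HOST}:{port} is reachable")
            except OSError as error:
                print(f"{HOST}:{port} is NOT reachable ({error})")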

    Read the article

  • Ask How-To Geek: Clone a Disk, Resize Static Windows, and Create System Function Shortcuts

    - by Jason Fitzpatrick
    This week we take a look at how to clone a hard disk for easy backup or duplication, resize stubbornly static windows, and create shortcuts for dozens of Windows functions. Once a week we dip into our reader mailbag and help readers solve their problems, sharing the useful solutions with you in the process. Read on to see our fixes for this week’s reader dilemmas.

    Read the article

  • Best development architecture for a small team of programmers

    - by Tio
    Hi all. I'm in my first month of work at a new company, and after I met the two programmers and asked how things are organized in terms of projects inside the company, they simply shrugged their shoulders and said that nothing is organized. I think my jaw hit the ground right then. (I know some of you think I should quit, but I'm in a privileged position: I'm the most experienced there, so there's room for me to grow inside the company, and I'm taking the high road.) So I talked to the IT guy and one of the programmers, and maybe this week I'm going to get a server all to myself to start organizing things. I've used various architectures in my previous work experiences. In one I was developing on a server on the network (no source control, of course); in another I was developing on my local computer, with no server on the network, just source control. And at home I have a mix of the two: everything I code is on a server on the network, those folders are under source control, and I also have a no-ip account configured on that server so I can access it from anywhere and show clients anything. For me this last solution (the one I have at home) is the best: a network server with a LAMP stack, where the server has a public IP so we can access it by domain name, with subdomains for each project, and everybody works directly on the network server. I think the problem arises when two or more people want to work on the same project; in that case the only way to do it is by using source control and local repositories. This is great, but I think it makes development a lot more complicated. In the example I gave, to make a change to the code I would simply need to open the file in my favorite editor, make the change, alter the database, check the change into source control, and presto, all done. Using local repositories, I would have to get the latest version, run the scripts on the local database to update it, alter the file, alter the database, check the changes in to the network server, update the database on the network server, see if everything is running well on the network server, and presto, all done. To me this seems overcomplicated for a change to a simple PHP page. I could share the database between local development and the network server; that would certainly help. Maybe the best way to do this is simply: a network server with a LAMP stack (a test server, so to speak), publicly accessible through the web; a LAMP stack on every developer's computer (minus the database); we develop locally, test, then check the changes into the test server, and presto. What do you think? Maybe I should start doing this at home too. Thanks and best regards.

    Read the article

  • This Week In Geek History: Steve Jobs Demos the First Mac, Mythbusters Hits the Airwaves, and Dr. Strangelove Invades Popular Culture

    - by Jason Fitzpatrick
    It was quite a wild ride for this week in Geek History: Steve Jobs gave a demonstration of the first Macintosh computer, beloved geek show MythBusters took to the air, and iconic movie Dr. Strangelove appeared in theatres and our collective consciousness.

    Read the article

  • Desktop Fun: Valentine’s Day Icon Packs

    - by Asian Angel
    Are you looking forward to Valentine’s Day? Then we have the perfect way for you to start customizing your desktop for the holiday with our Valentine’s Day Icon Packs collection.

    Read the article

  • SQL SERVER – 5 Tips for Improving Your Data with expressor Studio

    - by pinaldave
    It’s no secret that bad data leads to bad decisions and poor results. However, how do you prevent dirty data from taking up residence in your data store? Some might argue that it is the responsibility of the person sending you the data. While that may be true, in practice it rarely holds up: no matter how many times you ask, you will get the data however the sender decides to provide it.
    So now you have bad data. What constitutes bad data? There are quite a few valid answers, for example:
      - invalid date values
      - inappropriate characters
      - wrong data
      - values that exceed a pre-set threshold
    While it is certainly possible to write your own scripts and custom SQL to identify and deal with these anomalies, that effort often takes too long and becomes difficult to maintain. Leveraging an ETL tool like expressor Studio makes the data cleansing process much easier and faster. Below are some tips for using expressor to get your data into tip-top shape.
    Tip 1: Build reusable data objects with embedded cleansing rules. One of the new features in expressor Studio 3.2 is the ability to define constraints at the metadata level. Using expressor’s concept of Semantic Types, you can define reusable data objects with embedded logic, such as constraints for dealing with dirty data. Once defined, they can be saved as a shared atomic type and re-applied to other data attributes in other schemas. As you can see in the figure above, I have defined a constraint on zip code. I can then save the constraint rules I defined for zip code as a shared atomic type called, for example, zip_type. The next time I get a different data source whose schema also contains a zip code field, I can simply apply the shared atomic type (shown below) and the previously defined constraints are applied automatically.
    Tip 2: Unlock the power of regular expressions in Semantic Types. Another powerful feature introduced in expressor Studio 3.2 is the option to use a regular expression as a constraint. A regular expression identifies patterns within data. The pattern can be something as simple as a date format or something much more complex, such as a street address. For example, I could define that a valid IP address must be made up of four numbers, each between 0 and 255, separated by periods. So 192.168.23.123 would be a valid IP address, whereas 888.777.0.123 would not be. How can I account for this using regular expressions?
    A very simple regular expression that accepts any four groups of one to three digits separated by periods is: ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$
    Alternatively, the following is an exact check for truly valid IP addresses as defined above: ^(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])$
    In expressor, we would enter this regular expression as a constraint like this: here we select the corrective action to be ‘Escalate’, meaning that the expressor Dataflow operator will decide what to do. Some of the options include rejecting the offending record, skipping it, or aborting the dataflow.
    Tip 3: Email pattern expressions that might come in handy. In the example schema that I am using, there is a field for email. Email addresses are often entered incorrectly because people are trying to avoid spam. While there are a lot of different ways to define what constitutes a valid email address, a quick search online yields a couple of really useful regular expressions for validating email addresses.
    This one is short and sweet: \b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b (source: http://www.regular-expressions.info/)
    This one is more specific about which characters are allowed: ^([a-zA-Z0-9_\-\.]+)@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.)|(([a-zA-Z0-9\-]+\.)+))([a-zA-Z]{2,4}|[0-9]{1,3})(\]?)$ (source: http://regexlib.com/REDetails.aspx?regexp_id=26)
    (A short, standalone sketch of applying these IP and email patterns outside expressor appears at the end of this entry.)
    Tip 4: Reject “dirty data” for analysis or further processing. Yet another feature introduced in expressor Studio 3.2 is the ability to reject records based on constraint violations. To capture rejected records on input, specify Reject Record in the Error Handling setting of the Read File operator, then attach a Write File operator to the reject port of the Read File operator. Next, configure the Write File operator in a similar way to the Read File operator; the key difference is that its schema needs to be derived from the upstream operator, as shown below. Once configured, expressor writes rejected records to the file you specified. Along with the rejected records, expressor also captures diagnostic information that helps explain why each record was rejected, which makes diagnosing errors much easier.
    Tip 5: Use a Filter or Transform after the initial cleansing to finish the job. Sometimes you may want to predicate the data cleansing on a more complex set of conditions. For example, I may only be interested in processing data for males over the age of 25 in certain zip codes. Using an expressor Filter operator, you can define the conditional logic that isolates the records of interest from the rest. Alternatively, the expressor Transform operator can alter the input value via a user-defined algorithm or transformation; it also supports conditional logic, and data can be rejected based on constraint violations.
    The best tip I can leave you with, however, is not to constrain your solution design: expressor operators can be combined in many different ways to achieve the desired result. For example, in the expressor Dataflow below, I can post-process the rejected data from the Filter that did not meet my pre-defined criteria and, if successful, Funnel it back into the flow so that it gets written to the target table.
    I continue to be impressed that expressor offers all this functionality as part of their free expressor Studio desktop ETL tool, which you can download from here. The Studio ETL tool is absolutely free; they are open about the fact that if you want to deploy their software on a dedicated Windows server, you need to purchase their server software, whose pricing is posted on their website.
    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
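    As a minimal, standalone sketch (plain Python, not expressor, and only an illustration of the same constraint-plus-reject idea), the checks above might look like the following. The column names ip_address and email and the file names are hypothetical; the regular expressions are the ones quoted in Tips 2 and 3.

        # Minimal sketch: regex constraints with a reject file. Column names
        # (ip_address, email) and file names are assumptions, not expressor's API.
        import csv
        import re

        # Exact IPv4 check from Tip 2 (each octet 0-255).
        IPV4_RE = re.compile(
            r"^(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\."
            r"(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\."
            r"(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\."
            r"(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])$"
        )
        # "Short and sweet" email pattern from Tip 3, applied case-insensitively.
        EMAIL_RE = re.compile(r"\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b", re.IGNORECASE)

        def violations(row):
            """Return the names of the constraints this row violates."""
            problems = []
            if not IPV4_RE.match(row.get("ip_address") or ""):
                problems.append("ip_address")
            if not EMAIL_RE.fullmatch(row.get("email") or ""):
                problems.append("email")
            return problems

        def cleanse(in_path="input.csv", clean_path="clean.csv", reject_path="rejects.csv"):
            """Split the input into clean rows and rejected rows, like a reject port."""
            with open(in_path, newline="") as src, \
                 open(clean_path, "w", newline="") as good, \
                 open(reject_path, "w", newline="") as bad:
                reader = csv.DictReader(src)
                fields = reader.fieldnames or []
                good_writer = csv.DictWriter(good, fieldnames=fields, extrasaction="ignore")
                bad_writer = csv.DictWriter(bad, fieldnames=fields + ["violations"],
                                            extrasaction="ignore")
                good_writer.writeheader()
                bad_writer.writeheader()
                for row in reader:
                    problems = violations(row)
                    if problems:
                        # Record why the row was rejected (basic diagnostics).
                        row["violations"] = ";".join(problems)
                        bad_writer.writerow(row)
                    else:
                        good_writer.writerow(row)

        if __name__ == "__main__":
            cleanse()

    Run against a CSV that has ip_address and email columns, this splits the input into clean.csv and rejects.csv, with a violations column recording why each record was set aside, which is roughly the behaviour the post describes for expressor's reject port, minus the tooling.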

    Read the article

  • How to Enable Desktop Notifications for Gmail in Chrome

    - by Jason Fitzpatrick
    Last year Google rolled out desktop notifications for Google Calendar; now you can get Gmail and Gchat notifications on your desktop too. Read on as we walk you through configuring them both. Chrome’s desktop notifications are clean, easy to read, and really handy for keeping an eye on what’s going on inside Gmail without keeping the browser focused on it. Setting it up is easy; grab your copy of Chrome to follow along.

    Read the article

  • Desktop Fun: Dreams of Hawaii Wallpaper Collection

    - by Asian Angel
    Is the winter weather wearing you down and making you wish for a tropical vacation? Until summer and vacation time get here, let our Dreams of Hawaii Wallpaper collection help you think warm and happy thoughts.

    Read the article

  • Here’s What Would Happen if Computers Made Our Food [Comic]

    - by The Geek
    At least it’s better than getting spyware in your food.

    Read the article

  • Network is not available even when the Cisco VPN client is connected. Wrong route?

    - by javapowered
    I'm using a Vodafone 3G modem. I've disabled the other network devices in the system (Ethernet, Wi-Fi, WiMAX) and turned off the firewall and antivirus. The Cisco VPN client connects successfully, but I still cannot access the computer 192.168.147.120 (or any other computer on the network). Any suggestions are welcome, as I don't know what to do. Output of the ipconfig /all and route print commands (labels translated to English):

    Microsoft Windows [Version 6.1.7601]
    (C) Microsoft Corporation, 2009. All rights reserved.

    C:\Users\Oleg> ipconfig /all

    Windows IP Configuration
       Host Name . . . . . . . . . . . . : OlegPC
       Primary DNS Suffix  . . . . . . . :
       Node Type . . . . . . . . . . . . : Hybrid
       IP Routing Enabled. . . . . . . . : No
       WINS Proxy Enabled. . . . . . . . : No

    Ethernet adapter Local Area Connection 4:
       Connection-specific DNS Suffix  . :
       Description . . . . . . . . . . . : Cisco Systems VPN Adapter
       Physical Address. . . . . . . . . : 00-05-9A-3C-78-00
       DHCP Enabled. . . . . . . . . . . : No
       Autoconfiguration Enabled . . . . : Yes
       Link-local IPv6 Address . . . . . : fe80::c073:41b2:852f:eb87%26 (Preferred)
       IPv4 Address. . . . . . . . . . . : 10.53.127.204 (Preferred)
       Subnet Mask . . . . . . . . . . . : 255.0.0.0
       Default Gateway . . . . . . . . . :
       DHCPv6 IAID . . . . . . . . . . . : 536872346
       DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-14-6F-4C-8D-60-EB-69-85-10-2D
       DNS Servers . . . . . . . . . . . : fec0:0:0:ffff::1%1
                                           fec0:0:0:ffff::2%1
                                           fec0:0:0:ffff::3%1
       NetBIOS over Tcpip. . . . . . . . : Disabled

    Mobile broadband adapter Mobile Broadband Connection:
       Connection-specific DNS Suffix  . :
       Description . . . . . . . . . . . : Vodafone Mobile Broadband Network Adapter (Huawei)
       Physical Address. . . . . . . . . : 58-2C-80-13-92-63
       DHCP Enabled. . . . . . . . . . . : No
       Autoconfiguration Enabled . . . . : Yes
       IPv4 Address. . . . . . . . . . . : 10.229.227.77 (Preferred)
       Subnet Mask . . . . . . . . . . . : 255.255.255.252
       Default Gateway . . . . . . . . . : 10.229.227.78
       DNS Servers . . . . . . . . . . . : 163.121.128.134
                                           212.103.160.18
       NetBIOS over Tcpip. . . . . . . . : Disabled

    Tunnel adapter isatap.{737FF02E-D473-4F91-840E-2A4DD293FC12}:
       Media State . . . . . . . . . . . : Media disconnected
       Connection-specific DNS Suffix  . :
       Description . . . . . . . . . . . : Microsoft ISATAP Adapter #3
       Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0
       DHCP Enabled. . . . . . . . . . . : No
       Autoconfiguration Enabled . . . . : Yes

    Tunnel adapter isatap.{EF585226-5B07-4446-A5A4-CB1B8E4B13AC}:
       Media State . . . . . . . . . . . : Media disconnected
       Connection-specific DNS Suffix  . :
       Description . . . . . . . . . . . : Microsoft ISATAP Adapter #4
       Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0
       DHCP Enabled. . . . . . . . . . . : No
       Autoconfiguration Enabled . . . . : Yes

    Tunnel adapter Teredo Tunneling Pseudo-Interface:
       Connection-specific DNS Suffix  . :
       Description . . . . . . . . . . . : Teredo Tunneling Pseudo-Interface
       Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0
       DHCP Enabled. . . . . . . . . . . : No
       Autoconfiguration Enabled . . . . : Yes
       IPv6 Address. . . . . . . . . . . : 2001:0:4137:9e76:ea:b77:f51a:1cb2 (Preferred)
       Link-local IPv6 Address . . . . . : fe80::ea:b77:f51a:1cb2%16 (Preferred)
       Default Gateway . . . . . . . . . : ::
       NetBIOS over Tcpip. . . . . . . . : Disabled

    C:\Users\Oleg> route print
    ===========================================================================
    Interface List
     26 ... 00 05 9a 3c 78 00 ...... Cisco Systems VPN Adapter
     23 ... 58 2c 80 13 92 63 ...... Vodafone Mobile Broadband Network Adapter (Huawei)
      1 ........................... Software Loopback Interface 1
     19 ... 00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #3
     20 ... 00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #4
     16 ... 00 00 00 00 00 00 00 e0 Teredo Tunneling Pseudo-Interface
    ===========================================================================

    IPv4 Route Table
    ===========================================================================
    Active Routes:
    Network Destination        Netmask          Gateway        Interface   Metric
              0.0.0.0          0.0.0.0    10.229.227.78    10.229.227.77      296
             10.0.0.0        255.0.0.0          On-link     10.53.127.204     286
           10.6.93.21  255.255.255.255         10.0.0.1     10.53.127.204     100
          10.13.50.12  255.255.255.255         10.0.0.1     10.53.127.204     100
            10.53.8.0    255.255.252.0         10.0.0.1     10.53.127.204     100
        10.53.127.204  255.255.255.255          On-link     10.53.127.204     286
          10.53.128.0    255.255.248.0         10.0.0.1     10.53.127.204     100
          10.53.148.0  255.255.255.240         10.0.0.1     10.53.127.204     100
         10.53.148.16  255.255.255.240         10.0.0.1     10.53.127.204     100
        10.229.227.76  255.255.255.252          On-link     10.229.227.77     296
        10.229.227.77  255.255.255.255          On-link     10.229.227.77     296
        10.229.227.79  255.255.255.255          On-link     10.229.227.77     296
       10.255.255.255  255.255.255.255          On-link     10.53.127.204     286
            127.0.0.0        255.0.0.0          On-link         127.0.0.1     306
            127.0.0.1  255.255.255.255          On-link         127.0.0.1     306
      127.255.255.255  255.255.255.255          On-link         127.0.0.1     306
        192.168.147.0  255.255.255.240         10.0.0.1     10.53.127.204     100
       192.168.147.96  255.255.255.240         10.0.0.1     10.53.127.204     100
      192.168.147.112  255.255.255.240         10.0.0.1     10.53.127.204     100
      192.168.147.128  255.255.255.240         10.0.0.1     10.53.127.204     100
      192.168.147.144  255.255.255.240         10.0.0.1     10.53.127.204     100
      192.168.147.224  255.255.255.240         10.0.0.1     10.53.127.204     100
        192.168.214.0    255.255.255.0         10.0.0.1     10.53.127.204     100
        192.168.215.0    255.255.255.0         10.0.0.1     10.53.127.204     100
       194.247.133.19  255.255.255.255         10.0.0.1     10.53.127.204     100
      213.247.231.194  255.255.255.255    10.229.227.78     10.229.227.77     100
            224.0.0.0        240.0.0.0          On-link         127.0.0.1     306
            224.0.0.0        240.0.0.0          On-link     10.229.227.77     296
            224.0.0.0        240.0.0.0          On-link     10.53.127.204     286
      255.255.255.255  255.255.255.255          On-link         127.0.0.1     306
      255.255.255.255  255.255.255.255          On-link     10.229.227.77     296
      255.255.255.255  255.255.255.255          On-link     10.53.127.204     286
    ===========================================================================
    Persistent Routes:
      None

    IPv6 Route Table
    ===========================================================================
    Active Routes:
     If Metric Network Destination                     Gateway
     16     58 ::/0                                    On-link
      1    306 ::1/128                                 On-link
     16     58 2001::/32                               On-link
     16    306 2001:0:4137:9e76:ea:b77:f51a:1cb2/128   On-link
     16    306 fe80::/64                               On-link
     26    286 fe80::/64                               On-link
     16    306 fe80::ea:b77:f51a:1cb2/128              On-link
     26    286 fe80::c073:41b2:852f:eb87/128           On-link
      1    306 ff00::/8                                On-link
     16    306 ff00::/8                                On-link
     26    286 ff00::/8                                On-link
    ===========================================================================
    Persistent Routes:
      None

    C:\Users\Oleg>
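    One way to sanity-check which of the routes above would carry traffic to 192.168.147.120 is to reproduce the longest-prefix matching the route table expresses. The following is only an illustrative Python sketch, not a diagnosis; it uses a handful of entries copied from the IPv4 table above, and the remaining rows could be added the same way.

        # Illustrative longest-prefix lookup over a few routes copied from the table above.
        import ipaddress

        # (network, netmask, gateway, interface) rows from the IPv4 route table.
        ROUTES = [
            ("0.0.0.0",         "0.0.0.0",         "10.229.227.78", "10.229.227.77"),  # default, 3G modem
            ("10.0.0.0",        "255.0.0.0",       "On-link",       "10.53.127.204"),  # VPN adapter subnet
            ("192.168.147.112", "255.255.255.240", "10.0.0.1",      "10.53.127.204"),  # via VPN
            ("192.168.147.128", "255.255.255.240", "10.0.0.1",      "10.53.127.204"),  # via VPN
        ]

        def best_route(destination, routes):
            """Return (prefix_length, gateway, interface) of the most specific matching route."""
            dest = ipaddress.ip_address(destination)
            matches = []
            for network, netmask, gateway, interface in routes:
                net = ipaddress.ip_network(f"{network}/{netmask}", strict=False)
                if dest in net:
                    matches.append((net.prefixlen, gateway, interface))
            # The longest prefix (largest prefix length) wins, as in the OS routing table.
            return max(matches) if matches else None

        if __name__ == "__main__":
            # With the entries above this prints (28, '10.0.0.1', '10.53.127.204'):
            # the /28 route 192.168.147.112 via 10.0.0.1 on the VPN adapter.
            print(best_route("192.168.147.120", ROUTES))

    At least on paper, 192.168.147.120 falls inside 192.168.147.112/28 and is matched by a route through the Cisco VPN adapter (interface 10.53.127.204) via 10.0.0.1, so the table does contain an entry pointing that destination at the VPN.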

    Read the article

  • Xerox Phaser 3160N installation on Ubuntu 12.04 LTS machine

    - by Greg Verrall
    I recently had Windows XP die on one of my machines and have installed Ubuntu Linux. The OS works great, except for installing the Xerox Phaser 3160N printer. The OS can find and install the network printer, but when I print a test page it tells me “Internal Error – Please use the correct driver”. I have the correct drivers, as your support team have sent me the link (http://www.support.xerox.com/support/phaser-3160/file-download/enau.html?operatingSystem=linux&fileLanguage=en_GB&contentId=105724&from=downloads&viewArchived=false), but I cannot install these drivers to run the printer. These are the instructions from the online guide for installing on a Linux machine:
      1. Make sure that the machine is connected to your network and powered on. Also, your machine’s IP address should have been set.
      2. Insert the supplied software CD into your CD-ROM drive.
      3. Double-click the CD-ROM icon that appears on your Linux desktop.
      4. Double-click the Linux folder.
      5. Double-click the install.sh icon.
      6. The Xerox Installer window opens. Click Continue.
      7. The Add printer wizard window opens. Click Next.
      8. Select Network printer and click the Search button.
      9. The printer’s IP address and model name appear in the list field.
      10. Select your machine and click Next.
    I get as far as step 5, and step 6 never happens; if it did, it would be very easy from there. There are options to add additional software to Ubuntu, but it does not recognise the installation CD as valid when I try to add it as a source. Any ideas on who can help me? Regards, Greg Verrall

    Read the article

  • Domino Dump Turns Into Van Gogh Painting [Video]

    - by Jason Fitzpatrick
    We’ve seen a lot of domino projects in our day, but this is the first one we’ve seen that turns into a piece of classic art when it’s done. Courtesy of domino enthusiast FlippyCat: I recreated Vincent van Gogh’s “Starry Night” from just over 7,000 dominos. The second attempt took about 11 hours total to build. The first attempt failed, when I dropped a screw from the camera rig onto it. I was able to improve the swirling clouds better in the second attempt as a result though. I do not know how long the first attempt took, but I did not have any accidents building like I did in the second attempt! Next up, Edvard Munch’s The Scream?

    Read the article

  • 24 DIY Softbox Designs for Cheap and Flexible Photography Lighting [DIY]

    - by Jason Fitzpatrick
    If you’re looking for just the right softbox for your budget and photography needs, this collection of 24 great softbox designs is bound to have the perfect fit. At DIY Photography they hosted a DIY softbox contest. Out of the 70 entries they culled it down to the top 24 designs and rounded up the photo tours and build guides for you to browse. You can build the foamcore and CFL model seen in the photo above by following the build guide here. Hit up the link below to check out all the other designs, which range from full-body softboxes to on-camera softboxes. How To Build 24 DIY Softboxes [DIY Photography via Make]

    Read the article
