Search Results

Search found 23925 results on 957 pages for 'multiple render targets'.


  • Dereferencing possible null pointer in java

    - by Nealio
    I am just starting to get into graphics, and when I try to render I get the error "Exception in thread "Thread-2" java.lang.NullPointerException". I have no clue what is going on! Any help is greatly appreciated.

    //The display class for the game
    //Created: 10-30-2013
    //Last Modified: 10-30-2013
    package gamedev;

    import gamedev.Graphics.Render;
    import gamedev.Graphics.Screen;
    import java.awt.Canvas;
    import java.awt.Dimension;
    import java.awt.Graphics;
    import java.awt.Toolkit;
    import java.awt.image.BufferStrategy;
    import java.awt.image.BufferedImage;
    import java.awt.image.DataBufferInt;
    import javax.swing.JFrame;

    private void tick() {
    }

    private void render() {
        System.out.println("display.render");
        BufferStrategy bs = this.getBufferStrategy();
        if (bs == null) {
            createBufferStrategy(3);
        }
        for (int i = 0; i < GAMEWIDTH * GAMEHEIGHT; i++) {
            pixels[i] = screen.PIXELS[i];
        }
        screen.Render();
        //The line of code that is the problem
        Graphics g = bs.getDrawGraphics();
        //end problematic code
        g.drawImage(img, 0, 0, GAMEWIDTH, GAMEHEIGHT, null);
        g.dispose();
        bs.show();
    }
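
    A likely cause, offered as a hedged reading of the trace: Canvas.getBufferStrategy() returns null until a strategy has actually been created, so the first frame falls through the null check and dereferences bs anyway. A minimal sketch of the usual guard (hypothetical fix, reusing the poster's img/GAMEWIDTH/GAMEHEIGHT fields, not verified against the full class):

    private void render() {
        BufferStrategy bs = this.getBufferStrategy();
        if (bs == null) {
            // Request a strategy and bail out: bs is still null on this pass,
            // so drawing has to wait until the next frame.
            createBufferStrategy(3);
            return;
        }
        Graphics g = bs.getDrawGraphics();
        g.drawImage(img, 0, 0, GAMEWIDTH, GAMEHEIGHT, null);
        g.dispose();
        bs.show();
    }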

    Read the article

  • AIX Grid Control 10.2.0.5 Communication and Monitoring Issue since 31-DEC-2010

    - by jayatheertha.rao(at)oracle.com
    Detailed symptoms for Oracle Management Server (OMS) 10.2.0.5 on AIX

    Oracle Management Service 10.2.0.5 instances on AIX 5L remain active and functional, but the OMS instances fail to communicate with the Grid Control Management Agents. An SSLPeerUnverifiedException is reported in the file $ORACLE_HOME/sysman/log/emoms.trc when the OMS attempts to connect to an Agent:

    javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated
    at com.sun.net.ssl.internal.ssl.SSLSessionImpl.getPeerCertificateChain(DashoA12275)
    at oracle.sysman.emSDK.emd.comm.EMDClient.authenticateHTTPConnection(EMDClient.java:2002)
    at oracle.sysman.emSDK.emd.comm.EMDClient.getConnection(EMDClient.java:1877)
    at oracle.sysman.emSDK.emd.comm.EMDClient.getConnection(EMDClient.java:1810)
    at oracle.sysman.emSDK.emd.comm.EMDClient.verifyHttpConnection(EMDClient.java:2540)
    at oracle.sysman.emSDK.emd.comm.EMDClient.getResponseForRequest(EMDClient.java:2323)
    at oracle.sysman.emSDK.emd.comm.EMDClient.getUploadManagerStatus(EMDClient.java:4853)
    at oracle.sysman.eml.admin.rep.emdConfig.EmdConfigTargetsData.getEmdUploadData(EmdConfigTargetsData.java:1640)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

    This error may be reported when:
    - Accessing the Agent home page in Grid Control
    - Setting preferred credentials for a target monitored by the Agent
    - Managing metrics for a target monitored by the Agent

    The jobs scheduled to be run by Agents can become non-responsive. The OMS log file $ORACLE_HOME/sysman/log/emoms.trc can show:

    2010-12-31 00:06:58,204 [JobWorker 430:Thread-34] DEBUG emSDK.comm getStreamResponse.4015 - oracle.sysman.emSDK.emd.comm.CommException: java.io.IOException: javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated
    oracle.sysman.emSDK.emd.comm.CommException: java.io.IOException: javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated
    at oracle.sysman.emSDK.emd.comm.EMDClient.getStreamResponse_(EMDClient.java:4088)
    at oracle.sysman.emSDK.emd.comm.EMDClient.getStreamResponse(EMDClient.java:4009)
    at oracle.sysman.emSDK.emd.comm.EMDClient.remoteOperation(EMDClient.java:3404)
    at oracle.sysman.emdrep.jobs.CommandManager.requestRemoteCommand(CommandManager.java:765)
    at oracle.sysman.emdrep.jobs.commands.RemoteOp.executeCommand(RemoteOp.java:434)
    at oracle.sysman.emdrep.jobs.commands.RemoteOp.executeCommand(RemoteOp.java:491)
    at oracle.sysman.emdrep.jobs.BaseJobWorker.runStep(BaseJobWorker.java:614)
    at oracle.sysman.emdrep.jobs.BaseJobWorker.doOneOperation(BaseJobWorker.java:738)
    at oracle.sysman.emdrep.jobs.JobWorker.doOneOperation(JobWorker.java:306)
    at oracle.sysman.emdrep.jobs.JobWorker.run(JobWorker.java:288)
    at oracle.sysman.util.threadPoolManager.WorkerThread.run(Worker.java:261)

    Detailed symptoms for Grid Control Management Agent 10.2.0.5 on AIX

    Beginning 31-DEC-2010 00:00:00, 10.2.0.5 Management Agents running on the AIX 5L operating system fail to monitor Oracle Application Server targets. As a result, the Availability Status for the Oracle Application Server targets will be in the "Metric Error" state.
    NOTE: The 10.2.0.5.0 Agents experience these errors regardless of the version/platform of the OMS.

    The following metric error is seen in the console for the Oracle Application Server targets monitored by a Grid Control Management Agent 10.2.0.5 installed on AIX and experiencing the Root Certificate Authority issue:

    Message oracle.sysman.emSDK.emd.fetchlet.FetchletException: oracle.sysman.emSDK.emd.fetchlet.FetchletException: oracle.sysman.emSDK.emd.fetchlet.FetchletException: oracle.sysman.emSDK.emd.comm.CommException: java.io.IOException: javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated

    The Grid Control Management Agent log file $ORACLE_HOME/sysman/log/emagentfetchlet.log (or $ORACLE_HOME/hostname/sysman/log/emagentfetchlet.log for a clustered Agent) includes the following errors:

    2010-12-31 00:01:03,626 [nmefmgr_getJNIFetchlet] ERROR ias.ResponseMetric getResponseMetric.154 - Unable to compute application server status
    oracle.sysman.emSDK.emd.fetchlet.FetchletException: oracle.sysman.emSDK.emd.comm.CommException: java.io.IOException: javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated
    at oracle.sysman.ias.ias.ResponseMetric.getResponseMetric(ResponseMetric.java:108)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:79)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:618)
    at oracle.sysman.emd.fetchlets.JavaWrapperFetchlet.getMetric(JavaWrapperFetchlet.java:217)
    at oracle.sysman.emd.fetchlets.FetchletWrapper.getMetric(FetchletWrapper.java:382)

    Beginning 31-DEC-2010, 10.2.0.5 Management Agents on the AIX 5L platform also fail to secure or re-secure with the Oracle Management Service (OMS). This failure causes installation of 10.2.0.5 Agents on the AIX 5L platform to fail.
    NOTE: The 10.2.0.5.0 Agents experience these errors regardless of the version/platform of the OMS.

    The "emctl secure agent" command fails with the following error, which is written to the $ORACLE_HOME/sysman/log/secure.log file (or $ORACLE_HOME/hostname/sysman/log/secure.log for a clustered Agent):

    2011-01-03 21:06:11,941 [main] ERROR agent.SecureAgentCmd main.207 - Failed to secure the Agent:
    javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated
    at com.sun.net.ssl.internal.ssl.SSLSessionImpl.getPeerCertificateChain(DashoA6275)
    at oracle.sysman.emctl.secure.agent.SecureAgentCmd.checkUpload(SecureAgentCmd.java:478)
    at oracle.sysman.emctl.secure.agent.SecureAgentCmd.secureAgent(SecureAgentCmd.java:249)
    at oracle.sysman.emctl.secure.agent.SecureAgentCmd.main(SecureAgentCmd.java:200)

    For the solution, refer to AIX Grid Control 10.2.0.5 SSL Communication and Monitoring Issue since 31-DEC-2010 (Doc ID 1275070.1)

    Read the article

  • LWJGL Voxel game, glDrawArrays

    - by user22015
    I've been learning about 3D for a couple of days now. I managed to create a chunk (8x8x8), add optimization so it only renders the active and visible blocks, and then add so it only draws the faces which don't have a neighbour. Next, what I found from online research was that it is better to use glDrawArrays to increase performance. So I restarted my little project: render an entire chunk, then add optimization so it only renders active and visible blocks. But now I want to draw only the visible faces while using glDrawArrays. This is giving me trouble when calling glDrawArrays, because I'm passing a wrong count parameter:

    # A fatal error has been detected by the Java Runtime Environment:
    #
    # EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x0000000006e31a03, pid=1032, tid=3184
    #
    # Stack: [0x00000000023a0000,0x00000000024a0000], sp=0x000000000249ef70, free space=1019k
    Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
    C [ig4icd64.dll+0xa1a03]
    Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
    j org.lwjgl.opengl.GL11.nglDrawArrays(IIIJ)V+0
    j org.lwjgl.opengl.GL11.glDrawArrays(III)V+20
    j com.vox.block.Chunk.render()V+410
    j com.vox.ChunkManager.render()V+30
    j com.vox.Game.render()V+11
    j com.vox.GameHandler.render()V+12
    j com.vox.GameHandler.gameLoop()V+15
    j com.vox.Main.main([Ljava/lang/String;)V+13
    v ~StubRoutines::call_stub

    public class Chunk {
        public final static int[] DIM = { 8, 8, 8 };
        public final static int CHUNK_SIZE = (DIM[0] * DIM[1] * DIM[2]);
        Block[][][] blocks;
        private int index;
        private int vBOVertexHandle;
        private int vBOColorHandle;

        public Chunk(int index) {
            this.index = index;
            vBOColorHandle = GL15.glGenBuffers();
            vBOVertexHandle = GL15.glGenBuffers();
            blocks = new Block[DIM[0]][DIM[1]][DIM[2]];
            for (int x = 0; x < DIM[0]; x++) {
                for (int y = 0; y < DIM[1]; y++) {
                    for (int z = 0; z < DIM[2]; z++) {
                        blocks[x][y][z] = new Block();
                    }
                }
            }
        }

        public void render() {
            Block curr;
            FloatBuffer vertexPositionData2 = BufferUtils.createFloatBuffer(CHUNK_SIZE * 6 * 12);
            FloatBuffer vertexColorData2 = BufferUtils.createFloatBuffer(CHUNK_SIZE * 6 * 12);
            int counter = 0;
            for (int x = 0; x < DIM[0]; x++) {
                for (int y = 0; y < DIM[1]; y++) {
                    for (int z = 0; z < DIM[2]; z++) {
                        curr = blocks[x][y][z];
                        boolean[] neightbours = validateNeightbours(x, y, z);
                        if (curr.isActive() && !neightbours[6]) {
                            float[] arr = curr.createCube((index * DIM[0] * Block.BLOCK_SIZE * 2) + x * 2, y * 2, z * 2, neightbours);
                            counter += arr.length;
                            vertexPositionData2.put(arr);
                            vertexColorData2.put(createCubeVertexCol(curr.getCubeColor()));
                        }
                    }
                }
            }
            vertexPositionData2.flip();
            vertexPositionData2.flip();
            FloatBuffer vertexPositionData = BufferUtils.createFloatBuffer(vertexColorData2.position());
            FloatBuffer vertexColorData = BufferUtils.createFloatBuffer(vertexColorData2.position());
            for (int i = 0; i < vertexPositionData2.position(); i++)
                vertexPositionData.put(vertexPositionData2.get(i));
            for (int i = 0; i < vertexColorData2.position(); i++)
                vertexColorData.put(vertexColorData2.get(i));
            vertexColorData.flip();
            vertexPositionData.flip();
            GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vBOVertexHandle);
            GL15.glBufferData(GL15.GL_ARRAY_BUFFER, vertexPositionData, GL15.GL_STATIC_DRAW);
            GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
            GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vBOColorHandle);
            GL15.glBufferData(GL15.GL_ARRAY_BUFFER, vertexColorData, GL15.GL_STATIC_DRAW);
            GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
            GL11.glPushMatrix();
            GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vBOVertexHandle);
            GL11.glVertexPointer(3, GL11.GL_FLOAT, 0, 0L);
            GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vBOColorHandle);
            GL11.glColorPointer(3, GL11.GL_FLOAT, 0, 0L);
            System.out.println("Counter " + counter);
            GL11.glDrawArrays(GL11.GL_LINE_LOOP, 0, counter);
            GL11.glPopMatrix();
            //blocks[r.nextInt(DIM[0])][2][r.nextInt(DIM[2])].setActive(false);
        }

        //Random r = new Random();

        private float[] createCubeVertexCol(float[] CubeColorArray) {
            float[] cubeColors = new float[CubeColorArray.length * 4 * 6];
            for (int i = 0; i < cubeColors.length; i++) {
                cubeColors[i] = CubeColorArray[i % CubeColorArray.length];
            }
            return cubeColors;
        }

        private boolean[] validateNeightbours(int x, int y, int z) {
            boolean[] bools = new boolean[7];
            bools[6] = true;
            bools[6] = bools[6] && (bools[0] = y > 0 && y < DIM[1]-1 && blocks[x][y+1][z].isActive()); //top
            bools[6] = bools[6] && (bools[1] = y > 0 && y < DIM[1]-1 && blocks[x][y-1][z].isActive()); //bottom
            bools[6] = bools[6] && (bools[2] = z > 0 && z < DIM[2]-1 && blocks[x][y][z+1].isActive()); //front
            bools[6] = bools[6] && (bools[3] = z > 0 && z < DIM[2]-1 && blocks[x][y][z-1].isActive()); //back
            bools[6] = bools[6] && (bools[4] = x > 0 && x < DIM[0]-1 && blocks[x+1][y][z].isActive()); //left
            bools[6] = bools[6] && (bools[5] = x > 0 && x < DIM[0]-1 && blocks[x-1][y][z].isActive()); //right
            return bools;
        }
    }

    public class Block {
        public static final float BLOCK_SIZE = 1f;

        public enum BlockType {
            Default(0), Grass(1), Dirt(2), Water(3), Stone(4), Wood(5), Sand(6), LAVA(7);
            int BlockID;
            BlockType(int i) { BlockID = i; }
        }

        private boolean active;
        private BlockType type;

        public Block() {
            this(BlockType.Default);
        }

        public Block(BlockType type) {
            active = true;
            this.type = type;
        }

        public float[] getCubeColor() {
            switch (type.BlockID) {
                case 1: return new float[] { 1, 1, 0 };
                case 2: return new float[] { 1, 0.5f, 0 };
                case 3: return new float[] { 0, 0f, 1f };
                default: return new float[] { 0.5f, 0.5f, 1f };
            }
        }

        public float[] createCube(float x, float y, float z, boolean[] neightbours) {
            int counter = 0;
            for (boolean b : neightbours) if (!b) counter++;
            float[] array = new float[counter * 12];
            int offset = 0;
            if (!neightbours[0]) { //top
                array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE;
                array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE;
                array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE;
                array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE;
            }
            if (!neightbours[1]) { //bottom
                array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE;
                array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE;
                array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE;
                array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE;
            }
            if (!neightbours[2]) { //front
                array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE;
                array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE;
                array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE;
                array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE;
            }
            if (!neightbours[3]) { //back
                array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE;
                array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE;
                array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE;
                array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE;
            }
            if (!neightbours[4]) { //left
                array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE;
                array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE;
                array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE;
                array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE;
            }
            if (!neightbours[5]) { //right
                array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE;
                array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE;
                array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE;
                array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE;
            }
            return Arrays.copyOf(array, offset);
        }

        public boolean isActive() { return active; }
        public void setActive(boolean active) { this.active = active; }
        public BlockType getType() { return type; }
        public void setType(BlockType type) { this.type = type; }
    }

    I highlighted the code I'm concerned about in this screenshot: http://imageshack.us/a/img820/7606/18626782.png (not allowed to upload images yet). I know the code is a mess, but I'm just testing stuff, so I wasn't really thinking about it.
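
    Two things in the posted render() would explain the access violation; this is a hedged diagnosis, not a verified fix. First, counter accumulates floats (arr.length), but glDrawArrays expects a vertex count; at 3 floats per vertex, the driver is asked to read three times past the end of the buffer, which matches the EXCEPTION_ACCESS_VIOLATION inside the GL driver. Second, vertexPositionData2.flip() is called twice while vertexColorData2 is never flipped. A minimal sketch of the corrected tail of render():

    // Hypothetical correction, not the poster's verified code:
    vertexPositionData2.flip();
    vertexColorData2.flip(); // was mistakenly flipping the position buffer twice

    // createCube() emits 4 vertices (12 floats) per visible face, so convert
    // the accumulated float count into a vertex count before drawing.
    int vertexCount = counter / 3;
    // GL_QUADS matches the 4-vertex faces; keep GL_LINE_LOOP for a wireframe
    // if preferred, but the count must still be vertices, not floats.
    GL11.glDrawArrays(GL11.GL_QUADS, 0, vertexCount);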

    Read the article

  • Praise for Europe's Smart Metering & Conservation Efforts

    - by caroline.yu
    Recently, a writer at the Home Energy Team praised the UK for its efforts towards smart metering and energy conservation, in an article entitled UK Blazing A Trail With Smart Metering At Home? The article highlighted that the Department of Energy and Climate Change has announced that smart metering will be introduced in the next decade and that all UK households will have smart meters by the year 2020. In fact, the UK is not the only country striving to achieve carbon reduction targets, as many of its European counterparts have begun to take positive steps towards tackling the issue of energy conservation by implementing innovative new metering and billing technologies as well as promoting alternative energy solutions, such as wind and solar power. Since 1997, the states of the European Union, including France, Germany and Spain, have been working towards achieving a target of 12 percent renewable energy electricity by 2010. Germany in particular has made a significant achievement so far, having surpassed the target early in 2007. This success is largely due to the German Renewable Energy Act (EEG), which promoted the use of renewable energy. Recently, analysis from the European Wind Energy Association (EWEA) found that 21 of the EU Member States are meeting or exceeding their national target to achieve 20 percent renewable energy by 2020. However, six states - Belgium, Italy, Luxembourg, Malta, Bulgaria and Denmark - say they will not manage to reach their target through domestic action alone. Bulgaria and Denmark believe that with fresh national initiatives they could meet or exceed their targets, but others, including Italy, may need to import renewable energy from neighboring non-EU countries. Top achievers, according to the EWEA report, are Spain, which believes its renewable energy will reach 22.7 percent by 2020, as well as Germany, Estonia, Greece, Ireland, Poland, Slovakia and Sweden, who will all exceed their targets. "Importantly, the way that this renewable energy is controlled and distributed must be addressed in order to ensure its success," said Bastian Fischer, vice president and general manager EMEA, Oracle Utilities. "A smart grid infrastructure can enable utilities to deal with load distribution in times of increased need and ensure power is always available from these means. A smart grid also underpins the success of metering and billing technologies, such as smart metering, and allows utilities to deal with increased usage data and provide accurate billing." Outside of Europe, Australia has made significant steps towards improving water conservation. The Australian Department of Sustainability and Environment took some of the recent advancements made in the energy sector, including new metering and billing solutions, and applied them to the water industry, enhancing customer service and reducing consumption as a result. The adoption of smart metering in Europe is mainly driven by regulation, but significant technological improvements are being made the world over to change the way we use all kinds of energy. The developing markets, however, are lagging behind. One of the primary reasons for this is the lack of infrastructure to use as a foundation for setting up energy-saving solutions, which is slowing the adoption of technologies such as smart meters. On the other hand, these countries benefit from having less outdated infrastructure and fewer legacy systems, which others often cite as a difficult barrier to deploying new solutions. As a result, some countries should find new technologies easier to implement and adapt to in the immediate future, without this roadblock.

    Read the article

  • Sharing configuration settings between Windows Azure roles

    - by theo.spears
    If you are working on a medium-to-large Windows Azure project, it is likely to involve more than one role - for example, separate web and worker roles. Unfortunately, although all the Windows Azure configuration settings are stored in a single cscfg file, there is no way to share configuration settings between multiple roles. This means you have to duplicate common settings like connection strings across all your roles. There is an open Connect issue about this topic, but Microsoft have not said when they will fix it. In the meantime I've put together a cunning workaround (read: dirty hack) that creates a fake role containing your shared configuration settings and copies it to all roles as part of the build process. Here's how you set it up:

    1. Download the zip file attached to this post, and unzip it into the folder containing your Azure project (not your solution folder).

    2. Edit your csdef and cscfg files to include the placeholder project.

    ServiceDefinition.csdef:

    <?xml version="1.0" encoding="utf-8"?>
    <ServiceDefinition name="AzureSpendNotifier" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
      <WorkerRole name="GLOBAL">
        <ConfigurationSettings>
          <Setting name="ExampleSetting" />
        </ConfigurationSettings>
      </WorkerRole>
      <WorkerRole name="MyWorker">
        <ConfigurationSettings>
        </ConfigurationSettings>
      </WorkerRole>
      <WebRole name="MyWeb">
        <Sites>
          <Site name="Web">
            <Bindings>
              <Binding name="WebEndpoint" endpointName="WebEndpoint" />
            </Bindings>
          </Site>
        </Sites>
        <ConfigurationSettings>
        </ConfigurationSettings>
      </WebRole>
    </ServiceDefinition>

    ServiceConfiguration.cscfg:

    <?xml version="1.0" encoding="utf-8"?>
    <ServiceConfiguration serviceName="AzureSpendNotifier" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
      <Role name="GLOBAL">
        <ConfigurationSettings>
          <Setting name="ExampleSetting" value="Hello World" />
        </ConfigurationSettings>
        <Instances count="1" />
      </Role>
      <Role name="MyWorker">
        <Instances count="1" />
        <ConfigurationSettings>
        </ConfigurationSettings>
      </Role>
      <Role name="MyWeb">
        <Instances count="1" />
        <ConfigurationSettings>
        </ConfigurationSettings>
      </Role>
    </ServiceConfiguration>

    It is important that all your roles contain a ConfigurationSettings entry in both the cscfg and csdef files, even if it's empty - otherwise the shared configuration settings will not be inserted.

    3. Open your Azure deployment (.ccproj) project in Notepad, and add the highlighted line below:

    ...
    <Import Project="$(CloudExtensionsDir)Microsoft.CloudService.targets" />
    <Import Project="globalsettings/globalsettings.targets" />
    </Project>

    It is important you add this below the Microsoft.CloudService.targets import line, as it replaces some of the rules defined in that file. Visual Studio will prompt you to reload the project; say yes. At this point you will have a new Azure role called 'GLOBAL' with settings you can edit through the Visual Studio properties panel as normal. This role will never be deployed, but any settings you add to it will be copied to all your other roles when deployed or tested locally within Visual Studio.

    Read the article

  • Is DataRow thread safe? How to update a single datarow in a datatable using multiple threads? - .net

    - by NLV
    Hello all. I want to update a single DataRow in a DataTable using multiple threads. Is this actually possible? I've written the following code implementing simple multi-threading to update a single DataRow, and I get different results each time. Why is that?

    public partial class Form1 : Form
    {
        private static DataTable dtMain;
        private static string threadMsg = string.Empty;

        public Form1()
        {
            InitializeComponent();
        }

        private void Form1_Load(object sender, EventArgs e)
        {
            Thread[] thArr = new Thread[5];
            dtMain = new DataTable();
            dtMain.Columns.Add("SNo");
            DataRow dRow;
            dRow = dtMain.NewRow();
            dRow["SNo"] = 5;
            dtMain.Rows.Add(dRow);
            dtMain.AcceptChanges();
            ThreadStart ts = new ThreadStart(delegate { dtUpdate(); });
            thArr[0] = new Thread(ts);
            thArr[1] = new Thread(ts);
            thArr[2] = new Thread(ts);
            thArr[3] = new Thread(ts);
            thArr[4] = new Thread(ts);
            thArr[0].Start();
            thArr[1].Start();
            thArr[2].Start();
            thArr[3].Start();
            thArr[4].Start();
            while (!WaitTillAllThreadsStopped(thArr))
            {
                Thread.Sleep(500);
            }
            foreach (Thread thread in thArr)
            {
                if (thread != null && thread.IsAlive)
                {
                    thread.Abort();
                }
            }
            dgvMain.DataSource = dtMain;
        }

        private void dtUpdate()
        {
            for (int i = 0; i < 1000; i++)
            {
                try
                {
                    dtMain.Rows[0][0] = Convert.ToInt32(dtMain.Rows[0][0]) + 1;
                    dtMain.AcceptChanges();
                }
                catch { continue; }
            }
        }

        private bool WaitTillAllThreadsStopped(Thread[] threads)
        {
            foreach (Thread thread in threads)
            {
                if (thread != null && thread.ThreadState == ThreadState.Running)
                {
                    return false;
                }
            }
            return true;
        }
    }

    Any thoughts on this? Thank you, NLV
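
    For context: DataTable and DataRow make no thread-safety guarantees for concurrent writers, so five threads doing an unsynchronized read-increment-write on the same cell will interleave and lose updates, which is why the result differs on every run. A Java analogue of the same race, with an atomic fix alongside (an illustration, not the poster's C#):

    import java.util.concurrent.atomic.AtomicInteger;

    public class LostUpdateDemo {
        static int unsafeCounter = 0;                                 // shared, unsynchronized
        static final AtomicInteger safeCounter = new AtomicInteger();

        public static void main(String[] args) throws InterruptedException {
            Thread[] threads = new Thread[5];
            for (int t = 0; t < threads.length; t++) {
                threads[t] = new Thread(() -> {
                    for (int i = 0; i < 1000; i++) {
                        unsafeCounter++;               // read-modify-write race: updates get lost
                        safeCounter.incrementAndGet(); // atomic: no lost updates
                    }
                });
                threads[t].start();
            }
            for (Thread t : threads) t.join(); // join() replaces polling ThreadState + Abort()
            System.out.println("unsafe: " + unsafeCounter + " (varies from run to run)");
            System.out.println("safe:   " + safeCounter.get() + " (always 5000)");
        }
    }

    The same reasoning applies to the posted code: either serialize access to the row under a lock, or let each thread accumulate privately and combine the results at the end.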

    Read the article

  • MSDeploy doesn't deploy to remote server using MSBuild and Visual Studio 2010

    - by user317762
    I'm currently running Visual Studio Team System 2010 RC and I'm trying to get the Build Service set up to build my solution and deploy the 3 web applications in it. I've created a custom build configuration called Integration, and I've set the "IIS Web site/application name to use on the destination server" on the Package/Publish tab of the properties for each of the web applications. In my build definition I've set the following arguments:

    /p:DeployOnBuild=True /p:DeployTarget=MSDeployPublish /p:MSDeployPublishMethod=InProc /p:MsDeployServiceUrl=http://my-server-name:8172/msdeploy.axd /p:EnablePackageProcessLoggingAndAssert=True

    However, when I run the build I get the following error, for all three web applications:

    Updating setAcl (RightContent). C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.targets(3481,5): error : Web deployment task failed. (Attempted to perform an unauthorized operation.)

    I don't think this is my actual problem, though. The error occurs after the following entry in the log:

    Updating setAcl

    This is what's causing the error message, but it appears that MSDeploy is trying to deploy to the local IIS on the build server, not the server I specified with the MsDeployServiceUrl parameter. After looking at the targets file at C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.targets, I added EnablePackageProcessLoggingAndAssert, which adds extra logging. The log shows an empty string for the value of MsDeployServiceUrl. I also noticed in the target that MsDeployServiceUrl has a lowercase s, which is somewhat confusing because the task name MSDeployPublish has an uppercase S. I tried it with an uppercase S, then again with lowercase, but neither worked. A couple of other things to note: my build service is running as NETWORK SERVICE, and the server I'm trying to deploy to is on another domain. I also tried adding /p:username=mydomain\myusername /p:password=mypassword to the MSBuild parameter list, but that didn't help. Does anyone know if I'm supplying the correct parameters, or can you provide me with the correct ones? Thanks

    Read the article

  • Sometimes Xcode appears to ignore target build settings?

    - by Derek Clarkson
    Hi all, I've created an iPhone static library project with two targets, like this:

    Project
    -- Library (device) target
    -- Library (simulator) target

    The device target has the SDK set to the device, so it produces an armv6/7 library, and the simulator target is set to the simulator SDK, so it produces an i386 library. The issue I'm having is that the SDK settings on the targets keep getting overridden by the Xcode active target setting. I.e. if I build the device target but the Xcode window is showing the active SDK as being the simulator, Xcode will build a simulator library instead of a device library, ignoring the settings of the target - although it will put it into the *-iphoneos/ directory in the build directories! I originally had the same issue with another static library project and, after a lot of playing around, got everything to work correctly: the targets ignored the Xcode active SDK because they had their own specifications of what to build. The problem is that I don't know what made it work in that project, and I have not been able to reproduce the issue in it either. Does anyone have any ideas as to what is going on? ciao Derek

    Read the article

  • Error doing an MSBuild on a CLR Storedprocedure project on Build Server

    - by CraftyFella
    Hi, when building a CLR stored procedure project using MSBuild on our build server (TeamCity), we're getting the following error:

    error MSB4019: The imported project "C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\SqlServer.targets" was not found. Confirm that the path in the declaration is correct, and that the file exists on disk

    I've checked, and sure enough the file doesn't exist on disk on the build server; on my own machine it does exist. I don't really want to start copying files over to the build server manually. Here are the lines from the csproj file which import the targets files:

    <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />
    <Import Project="$(MSBuildToolsPath)\SqlServer.targets" />

    And here's the line from the proj file which is being run by our TeamCity server:

    <Import Project="..\$(ProjectName).csproj"/>

    My question is really: where does this file come from? Is it part of the Visual Studio install, for example, or is there some redistributable package somewhere that would allow me to compile this project on our build server? Thanks. By the way, if I just copy the file onto the build server, it does actually work. Dave

    Read the article

  • Converting Makefile to Visual Studio Terminology Questions (First time using VS)

    - by Ukko
    I am an old Unix guy who is converting a makefile-based project over to Microsoft Visual Studio. I got tasked with this because I understand the Makefile, which chokes VS's automatic import tools. I am sure there is a better way than what I am doing, but we are making things fit into the customer's environment, and that is driving my choices. So gmake is not a valid answer, even if it is the right one ;-) I just have a couple of questions on terminology that I think will be easy for an experienced (or junior) user to answer. Presently, a "make all" generates several executables and a shared library. How should I structure this? Is it one "solution" with multiple projects? There is a body of common code (say 50%) that is shared between the various executable targets but is not in a formal library, if that matters. I thought I could just set up the first executable and then add targets for the others, but that does not seem to work. I know I am working against the tool, so what is the right way? I am also using Visual C++ 2010 Express to try to do this, so that may also be a problem if multiple targets are not supported without using Visual C++ 2010 (insert superlative). Thanks - this is really one of those questions that should be answerable by a quick chat with the resident Windows developer at the water cooler. So, I am asking at the virtual water cooler, and I'll also spring for a virtual frosty beverage after work.

    Read the article

  • Location redirect - is tracking possible ?

    - by Gerald Ferreira
    Hi there, I was wondering if someone could help me. I have the following script that redirects users to an affiliate link when they click on a banner:

    <?php
    $targets = array(
        'site1' => 'http://www.site1.com/',
        'site2' => 'http://www.site2.com/',
        'site3' => 'http://www.site3.com/',
        'site4' => 'http://www.site4.com/',
    );
    if (isset($targets[$_GET['id']])) {
        header('Location: '.$targets[$_GET['id']]);
        exit;
    }
    ?>

    Is it possible to track when a user hits the banner, recording the referring site as well as the IP address of the person clicking on it - something like pixel tracking? I have tried to add an iframe that does the tracking, but it creates an error. Hope it makes sense. Thanks! This is more or less how I would have done it in ASP:

    <%
    var Command1 = Server.CreateObject("ADODB.Command");
    Command1.ActiveConnection = MM_cs_stats_STRING;
    Command1.CommandText = "INSERT INTO stats.g_stats (g_stats_ip, g_stats_referer) VALUES (?, ? ) ";
    Command1.Parameters.Append(Command1.CreateParameter("varg_stats_ip", 200, 1, 20, (String(Request.ServerVariables("REMOTE_ADDR")) != "undefined" && String(Request.ServerVariables("REMOTE_ADDR")) != "") ? String(Request.ServerVariables("REMOTE_ADDR")) : String(Command1__varg_stats_ip)));
    Command1.Parameters.Append(Command1.CreateParameter("varg_stats_referer", 200, 1, 255, (String(Request.ServerVariables("HTTP_REFERER")) != "undefined" && String(Request.ServerVariables("HTTP_REFERER")) != "") ? String(Request.ServerVariables("HTTP_REFERER")) : String(Command1__varg_stats_referer)));
    Command1.CommandType = 1;
    Command1.CommandTimeout = 0;
    Command1.Prepared = true;
    Command1.Execute();
    %>

    I am not sure how to do it in PHP - unfortunately the hosting only supports PHP, so I am more or less clueless on how to do it there. I was thinking that if I can somehow call a picture, I can do it with pixel tracking in another ASP page on another server. Hope this makes better sense.

    Read the article

  • SQL SERVER – Fundamentals of Columnstore Index

    - by pinaldave
    There are two kinds of storage in a database: row store and column store. Row store does exactly as the name suggests - it stores rows of data on a page - and column store stores all the data in a column on the same page. These columns are much easier to search: instead of a query searching all the data in an entire row whether the data is relevant or not, column store queries need only search a much smaller number of columns. This means major increases in search speed and hard drive use. Additionally, column store indexes are heavily compressed, which translates to even greater memory savings and faster searches. I am sure this looks very exciting, but it does not mean that you should convert every single index from row store to column store; one has to understand the proper places to use row store or column store indexes. Let us understand in this article how the columnstore type of index differs. Column store indexes are powered by Microsoft's VertiPaq technology. However, all you really need to know is that this method of storing data in columns on a single page is much faster and more efficient. Creating a column store index is very easy, and you don't have to learn new syntax to create one: you just specify the keyword "COLUMNSTORE" and enter the data as you normally would. Keep in mind that once you add a column store index to a table, though, you cannot delete, insert or update the data - it is READ ONLY. However, since column store will mainly be used for data warehousing, this should not be a big problem; you can always use partitioning to avoid rebuilding the index. A columnstore index stores each column in a separate set of disk pages, rather than storing multiple rows per page as data traditionally has been stored. The difference between the column store and row store approaches is illustrated below: in the case of row store indexes, multiple pages contain multiple rows, with the columns spanning across multiple pages; in the case of column store indexes, multiple pages contain multiple single columns, so only the columns needed to solve a query are fetched from disk. Additionally, there is a good chance that there will be redundant data within a single column, which further helps to compress the data; this has a positive effect on buffer hit rate, as most of the data will be in memory and will not need to be retrieved. Let us see a small example of how a columnstore index improves the performance of a query on a large table. As a first step, let us create a sample data set which is large enough to show the performance impact of a columnstore index. The time taken to create the sample database may vary on different computers based on the resources.
    USE AdventureWorks
    GO
    -- Create New Table
    CREATE TABLE [dbo].[MySalesOrderDetail](
        [SalesOrderID] [int] NOT NULL,
        [SalesOrderDetailID] [int] NOT NULL,
        [CarrierTrackingNumber] [nvarchar](25) NULL,
        [OrderQty] [smallint] NOT NULL,
        [ProductID] [int] NOT NULL,
        [SpecialOfferID] [int] NOT NULL,
        [UnitPrice] [money] NOT NULL,
        [UnitPriceDiscount] [money] NOT NULL,
        [LineTotal] [numeric](38, 6) NOT NULL,
        [rowguid] [uniqueidentifier] NOT NULL,
        [ModifiedDate] [datetime] NOT NULL
    ) ON [PRIMARY]
    GO
    -- Create clustered index
    CREATE CLUSTERED INDEX [CL_MySalesOrderDetail] ON [dbo].[MySalesOrderDetail]
    ( [SalesOrderDetailID])
    GO
    -- Create Sample Data Table
    -- WARNING: This query may run up to 2-10 minutes based on your system's resources
    INSERT INTO [dbo].[MySalesOrderDetail]
    SELECT S1.*
    FROM Sales.SalesOrderDetail S1
    GO 100

    Now let us do a quick performance test. I have kept STATISTICS IO ON to measure how much IO the following queries take. In my test, first I run the query which uses the regular index and note its IO usage. After that, we create the columnstore index and measure the IO of the same query.

    -- Performance Test
    -- Comparing Regular Index with ColumnStore Index
    USE AdventureWorks
    GO
    SET STATISTICS IO ON
    GO
    -- Select Table with regular Index
    SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice,
        SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty
    FROM [dbo].[MySalesOrderDetail]
    GROUP BY ProductID
    ORDER BY ProductID
    GO
    -- Table 'MySalesOrderDetail'. Scan count 1, logical reads 342261, physical reads 0, read-ahead reads 0.
    -- Create ColumnStore Index
    CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_MySalesOrderDetail_ColumnStore]
    ON [MySalesOrderDetail] (UnitPrice, OrderQty, ProductID)
    GO
    -- Select Table with Columnstore Index
    SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice,
        SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty
    FROM [dbo].[MySalesOrderDetail]
    GROUP BY ProductID
    ORDER BY ProductID
    GO

    It is very clear from the results that the query performs extremely fast after creating the columnstore index. The number of pages it has to read to run the query is drastically reduced, as the columns needed by the query are stored on the same pages and the query does not have to go through every single page to read them. If we enable the execution plan and compare, we can see that the columnstore index performs far better than the regular index in this case. Let us clean up the database.

    -- Cleanup
    DROP INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail]
    GO
    TRUNCATE TABLE dbo.MySalesOrderDetail
    GO
    DROP TABLE dbo.MySalesOrderDetail
    GO

    In future posts we will see cases where a columnstore index is not the appropriate solution, as well as a few other tricks and tips for columnstore indexes.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Best Practices For Database Consolidation On Exadata - New Whitepapers

    - by Javier Puerta
    Best Practices For Database Consolidation On Exadata Database Machine (Nov. 2011)
    Consolidation can minimize idle resources, maximize efficiency, and lower costs when you host multiple schemas, applications or databases on a target system. Consolidation is a core enabler for deploying Oracle database on public and private clouds. This paper provides the Exadata Database Machine (Exadata) consolidation best practices to set up and manage systems and applications for maximum stability and availability: Download here

    Oracle Exadata Database Machine Consolidation: Segregating Databases and Roles (Sep. 2011)
    This paper is focused on the aspects of segregating databases from each other in a platform consolidation environment on an Oracle Exadata Database Machine. Platform consolidation is the consolidation of multiple databases onto a single Oracle Exadata Database Machine. When multiple databases are consolidated on a single Database Machine, it may be necessary to isolate certain database components or functions in order to meet business requirements and provide best practices for a secure consolidation. In this paper we outline the use of Oracle Exadata database-scoped security to securely separate database management, and provide a detailed case study that illustrates the best practices. Download here

    Read the article

  • Creating Hosting Accounts in WHM on a Single IP

    - by Daniel Hanly
    I've just purchased a VPS with the hope of transferring multiple shared hosting accounts onto it. The problem is that I've only got 2 IP addresses with my VPS. I can create an account and assign it an IP address, but once I've done this once, I can't do it again. (1 IP address is my main root WHM IP, the other is my new hosting account IP). Can I create multiple hosting accounts and use the same IP? How would I manage multiple hosting accounts in this way? The domain for this hosting account has been purchased by the client, and they hold it (can't transfer for 60 days), so I need to adjust the DNS settings to redirect to my newly created hosting area - how can I do this without a dedicated IP address?

    Read the article

  • Improve your Application Performance with .NET Framework 4.0

    Nice article on CodeGuru. The processors we use today are quite different from those of just a few years ago, as most now provide multiple cores and/or multiple threads. With multiple cores and/or threads, we need to change how we tackle problems in code. Yes, we can still continue to write code that performs an action in a top-down fashion to complete a task. This approach will continue to work; however, you are not taking advantage of the extra processing power available. The best way to take advantage of the extra cores prior to .NET Framework 4.0 was to create threads and/or utilize the ThreadPool. For many developers, utilizing threads or the ThreadPool can be a little daunting. The .NET Framework 4.0 drastically simplified the process of utilizing the extra processing power through the Task Parallel Library (TPL). The article covers the following topics: "Data Parallelism", "Parallel LINQ (PLINQ)" and "Task Parallelism".
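
    The data-parallelism idea the article describes translates directly to other runtimes. A Java analogue (an illustration only; the article itself is about .NET's TPL):

    import java.util.stream.LongStream;

    public class DataParallelismDemo {
        public static void main(String[] args) {
            // Sequential: a single core walks the whole range top-down.
            long seq = LongStream.rangeClosed(1, 50_000_000L)
                                 .map(n -> n * n % 1_000_003)
                                 .sum();

            // Parallel: the runtime splits the range across the available cores,
            // the same shift that TPL's Parallel.For brings to .NET code.
            long par = LongStream.rangeClosed(1, 50_000_000L)
                                 .parallel()
                                 .map(n -> n * n % 1_000_003)
                                 .sum();

            System.out.println(seq == par); // true: same result, computed on all cores
        }
    }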

    Read the article

  • How to Use Windows 8's Storage Spaces to Mirror & Combine Drives

    - by Chris Hoffman
    “Storage Spaces” is a new feature in Windows 8 that can combine multiple hard drives into a single virtual drive. It can mirror data across multiple drives for redundancy, or combine multiple physical drives into a single pool of storage. You can even create pools of storage larger than the amount of physical storage space you have available: when the physical storage fills up, you can plug in another drive and take advantage of it with no additional configuration required. Storage Spaces is similar to RAID or LVM on Linux.

    Read the article

  • How can I tweak this split-tar-gz-archive script?

    - by Sai
    I came across this shell script that can be used to split an entire directory across multiple compressed files. I am interested in making a minor tweak to it. Say I have n GB of files in a directory, including one folder of n MB whose contents I do not want split across multiple tar files. I want that n MB folder to fit within a single compressed file, and the remaining (n GB - n MB) split across multiple files. I am not sure whether that is possible with this script, and I am looking forward to suggestions; though the script is well documented, it is quite complex for me to understand.
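
    For what it's worth, the grouping logic being asked about can be sketched independently of the shell script. The Java sketch below (hypothetical paths; zip parts rather than the script's tar-stream split, with the size cap tracked in uncompressed bytes) packs a directory into size-capped archives while pinning one folder entirely into the first part:

    import java.io.IOException;
    import java.nio.file.*;
    import java.util.*;
    import java.util.zip.*;

    public class SplitArchiver {
        public static void main(String[] args) throws IOException {
            Path root = Paths.get("data");                  // directory to archive (assumed name)
            Path keepWhole = root.resolve("keep-together"); // folder that must not be split (assumed name)
            long capBytes = 64L * 1024 * 1024;              // per-part budget, the "n MB" in the question

            List<Path> files;
            try (var stream = Files.walk(root)) {
                files = new ArrayList<>(stream.filter(Files::isRegularFile).toList());
            }
            // Sort the pinned folder's files first so they all land together in part 0.
            files.sort(Comparator.comparing((Path p) -> !p.startsWith(keepWhole)));

            int part = 0;
            long written = 0;
            ZipOutputStream zip = newPart(part);
            for (Path file : files) {
                long size = Files.size(file);
                boolean pinned = file.startsWith(keepWhole);
                // Roll over to a new part when over budget, but never while emitting pinned files.
                if (!pinned && written > 0 && written + size > capBytes) {
                    zip.close();
                    zip = newPart(++part);
                    written = 0;
                }
                zip.putNextEntry(new ZipEntry(root.relativize(file).toString()));
                Files.copy(file, zip);
                zip.closeEntry();
                written += size;
            }
            zip.close();
        }

        private static ZipOutputStream newPart(int n) throws IOException {
            return new ZipOutputStream(Files.newOutputStream(Paths.get("archive.part" + n + ".zip")));
        }
    }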

    Read the article

  • Caveat utilitor - Can I run two versions of Microsoft Project side-by-side?

    - by Martin Hinshelwood
    A number of our customers have asked if there are any problems in installing and running multiple versions of Microsoft Project on a single client. Although this is a case of Caveat utilitor (let the user beware), as long as the user understands and accepts the issues that can occur, they can do this. Although Microsoft provides the ability to leave old versions of Office products (except Outlook) on your client when you are installing a new version of the product, they certainly do not endorse doing so.

    Figure: For Project you can choose to keep the old stuff

    That being the case, I would have preferred that they put a "(NOT RECOMMENDED)" after the options to impart that knowledge to the rest of us, but they did not. The default and recommended behaviour is for the newer version's installer to remove the older versions. Of course this does not apply in reverse: there are no forward compatibility packs for Office. There are a number of negative behaviours (or bugs) that can occur in this configuration:

    There is only one MS Project
    In Windows, a file extension can only be associated with a single program. In this case, MPP files can be associated with only one version of winproj.exe. The executables are in different folders, so if a user double-clicks a Project file on the desktop, in file explorer, or in an Outlook email, Windows will launch the winproj.exe associated with MPP and then load the MPP file. There are problems associated with this situation, and in some cases workarounds. If the user double-clicks on a Project 2010 file, Project 2007 launches but is unable to open the file because it is a newer version. The workaround is for the user to launch Project 2010 from the Start menu, then open the file. If the file is attached to an email, they will need to first drag the file to the desktop.

    All your linked MS Project files need to be of the same version
    There are a number of problems that occur when people use Microsoft's Object Linking and Embedding (OLE) technology. The three common uses of OLE are:
    - inserted projects, where a master project contains sub-projects and each sub-project resides in its own MPP file
    - shared resource pools, where multiple MPP files share a common resource pool kept in a single MPP file
    - cross-project links, where a task or milestone in one MPP file has a predecessor/successor relationship with a task or milestone in a different MPP file

    What I've seen happen before is that if you are running a version of Project that is not associated with the MPP extension and then try to activate an OLE link, Project tries to launch the other version of Project. Things start getting very confused, since different MPP files are being controlled by different versions of Project running at the same time. I haven't tried this in a while so I can't give you exact symptoms, but I suspect that if Project 2010 is involved the symptoms will be different than in a Project 2003/2007 scenario. I've noticed that Project 2010 gives different error messages for the exact same problem when it occurs in Project 2003 or 2007. -Anonymous

    The recommendation would be either not to use these features if you have to have multiple versions of Project installed, or to use only a single version of Project. You may get unexpected negative behaviours when using shared resource pools even when you are not running multiple versions, as I have found that they can get broken very easily.
    If you need these things, then it is probably best to use Project Server, as it was created to solve many of these specific issues. Note: I would not even allow multiple people to access a network copy of a Project file, because of the way Windows locks files in write mode; I've seen users' files get write-locked so badly that the only resolution was to reboot the server.

    Changing the default version to run for an extension
    So what if you want to change the default association from Project 2007 to Project 2010?

    Figure: "Control Panel | Folder Options | Change the file associated with a file extension"

    Windows normally only lists the last version installed for a particular extension. You can select a specific version by selecting the program you want to change and clicking "Change program… | Browse…" and then selecting the .exe you want to use on the file system.

    Figure: You will need to select the exact version of "winproj.exe" that you want to run

    Conclusion
    Although it is possible to run multiple versions of Project on one system, in the main it does not really make sense.

    Read the article

  • Is true multithreading really necessary?

    - by Jonathan Graef
    So yeah, I'm creating a programming language. And the language allows multiple threads. But, all threads are synchronized with a global interpreter lock, which means only one thread is allowed to execute at a time. The only way to get the threads to switch off is to explicitly tell the current thread to wait, which allows another thread to execute. Parallel processing is of course possible by spawning multiple processes, but the variables and objects in one process cannot be accessed from another. However the language does have a fairly efficient IPC interface for communicating between processes. My question is: Would there ever be a reason to have multiple, unsynchronized threads within a single process (thus circumventing the GIL)? Why not just put thread.wait() statements in key positions in the program logic (presuming thread.wait() isn't a CPU hog, of course)? I understand that certain other languages that use a GIL have processor scheduling issues (cough Python), but they have all been resolved.
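
    One classic answer to the question, sketched in Java purely as an analogue (the poster's language is their own design): preemptive threads let blocking operations overlap without every code path remembering to yield, whereas with cooperative switching one forgotten thread.wait() serializes everything.

    public class OverlapDemo {
        public static void main(String[] args) throws InterruptedException {
            long start = System.nanoTime();
            Runnable blockingWork = () -> {
                try {
                    Thread.sleep(1000); // stands in for a blocking read; no explicit yield anywhere
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            };
            Thread a = new Thread(blockingWork);
            Thread b = new Thread(blockingWork);
            a.start();
            b.start();
            a.join();
            b.join();
            // Roughly 1 second total: the two blocking calls overlapped. Under a lock
            // released only at explicit wait() points, a missing yield in either task
            // would stretch this to roughly 2 seconds.
            System.out.printf("elapsed: %.2f s%n", (System.nanoTime() - start) / 1e9);
        }
    }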

    Read the article

  • Good architecture for user information on separate databases?

    - by James P. Wright
    I need to write an API to connect to an existing SQL database. The API will be written in ASP.NET MVC 3. The slight problem is that existing users of the system may have a username on multiple databases. Each company using the product gets a brand new instance of the database, but over the years (the system has been running for 10 years) quite a few users (hundreds) have ended up with multiple usernames across multiple "companies" (things got fragmented, obviously, and sometimes a single company has 5 "projects" that each have their own database). Long story short, I need a single unified user login that will allow existing users to access their information across all their projects. The only thing I can think of is storing a bunch of connection strings, but that feels like a really bad idea. I'll have a new database that will hold the "unified user" information... can anyone suggest a solid system architecture that can handle a setup like this?
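
    One common shape for this, sketched under assumed names (an illustration, not a prescription): keep a central identity store that maps one unified login to its per-project accounts, and resolve the project's data source by name at request time instead of scattering raw connection strings through the code.

    import java.util.*;

    // Hypothetical central-directory model: one unified login, many project accounts.
    record ProjectAccount(String projectId, String legacyUsername, String dataSourceName) {}

    class UnifiedUserDirectory {
        // unified login -> accounts across project databases (lives in the new central DB)
        private final Map<String, List<ProjectAccount>> accounts = new HashMap<>();

        void link(String unifiedLogin, ProjectAccount account) {
            accounts.computeIfAbsent(unifiedLogin, k -> new ArrayList<>()).add(account);
        }

        // The API layer resolves the named data source from a managed pool,
        // so connection details live in one place rather than per user.
        List<ProjectAccount> accountsFor(String unifiedLogin) {
            return accounts.getOrDefault(unifiedLogin, List.of());
        }
    }

    public class DirectoryDemo {
        public static void main(String[] args) {
            UnifiedUserDirectory dir = new UnifiedUserDirectory();
            dir.link("jwright", new ProjectAccount("acme-proj1", "jim.w", "ds_acme1"));
            dir.link("jwright", new ProjectAccount("acme-proj2", "jwright2", "ds_acme2"));
            System.out.println(dir.accountsFor("jwright")); // both projects resolved from one login
        }
    }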

    Read the article

  • Mark Hurd on the Customer Revolution: Oracle's Top 10 Insights

    - by Richard Lefebvre
    Reprint of an article from Forbes.

    Businesses that fail to focus on customer experience will hear a giant sucking sound from their vanishing profitability, because in today's dynamic global marketplace consumers now hold the power in the buyer-seller equation, and sellers need to revamp their strategy for this new world order. The ability to relentlessly deliver connected, personalized and rewarding customer experiences is rapidly becoming one of the primary sources of competitive advantage, and the inability or unwillingness to realize that the customer is a company's most important asset will lead, inevitably, to decline and failure. Welcome to the lifecycle of customer experience, in which consumers explore, engage, shop, buy, ask, compare, complain, socialize, exchange, and more across multiple channels, with the unconditional expectation that each of those interactions will be completed in an efficient and personalized manner however, wherever, and whenever the customer wants. While many niche companies are offering point solutions within that sprawling and complex spectrum of needs and requirements, businesses looking to deliver superb customer experiences are still left having to do multiple product evaluations, multiple contract negotiations, multiple test projects, multiple deployments, and, perhaps most annoying of all, multiple and never-ending integration projects to string together all those niche products from all those niche vendors. With its new suite of customer-experience solutions, Oracle believes it can help companies unravel these challenges and move at the speed of their customers, anticipating their needs and desires and creating enduring and profitable relationships. Those solutions span the full range of marketing, selling, commerce, service, listening/insights, and social and collaboration tools for employees. When Oracle launched its suite of Customer Experience solutions at a recent event in New York City, president Mark Hurd analyzed the customer experience revolution taking place and presented Oracle's strategy for empowering companies to capitalize on this important market shift. From Hurd's presentation and related materials, I've extracted a list of Hurd's Top 10 Insights into the Customer Revolution.

    1. Please Don't Feed the Competitor's Pipeline! After enduring a poor experience, 89% of consumers say they would immediately take their business to your competitor. (Except where noted, the source for these findings is the 2011 Customer Experience Impact (CEI) Report, including a survey commissioned by RightNow (acquired by Oracle in March 2012) and conducted by Harris Interactive.)
    2. The Addressable Market Is Massive. Only 1% of consumers say their expectations were always met by their actual experiences.
    3. They're Willing to Pay More! In return for a great experience, 86% of consumers say they'll pay up to 25% more.
    4. The Social Media Microphone Is Always Live. After suffering through a poor experience, more than 25% of consumers said they posted a negative comment on Twitter or Facebook or other social media sites. Conversely, of those consumers who got a response after complaining, 22% posted positive comments about the company.
    5. The New Deal Is Never Done: Embrace the Entire Customer Lifecycle. An appropriately active and engaged relationship, says Hurd, extends across every step of the entire process: need, research, select, purchase, receive, use, maintain, and recommend.
    6. The 360-Degree Commitment. Customers want to do business with companies that actively and openly demonstrate the desire to establish strong and seamless connections across employees, the company, and the customer, says research firm Temkin Group in its report called "The CX Competencies."
    7. Understand the Emotional Drivers Behind Brand Love. What makes consumers fall in love with a brand? Among the top factors are friendly employees and customer reps (73%), easy access to information and support (55%), and personalized experiences, such as when companies know precisely what products or services customers have purchased in the past and what issues those customers have raised (36%).
    8. The Importance of Immediate Action. You've got one week to respond, and then the opportunity's lost. If your company needs more than a week to answer a prospect's question or request, most of those prospects will terminate the relationship.
    9. Want More Revenue, Less Churn, and More Referrals? Then improve the overall customer experience: Forrester's research says that approach put an extra $900 million in the pockets of wireless service providers, $800 million for hotels, and $400 million for airlines.
    10. The Formula for CX Success. Hurd says it includes three elegantly interlaced factors: Connected Engagement, to personalize the experience; Actionable Insight, to maximize the engagement; and Optimized Execution, to deliver on the promise of value.

    Read the article

  • Webcor Builders Coordinates Construction Schedules and Mitigates Potential Delays More Efficiently with Integrated Project Management

    - by Sylvie MacKenzie, PMP
    With more than 40 years of commercial construction experience, Webcor Builders is a leading builder of distinguished, high-profile projects, including high-rise condominiums and hotels, laboratories, healthcare centers, and public works projects. Webcor is also known for its award-winning concrete, interior construction, historic restoration, and seismic renovation work. The company has completed more than 50 million square feet of projects to date.

    Considering the variety and complexity of the construction projects Webcor undertakes, an integrated project management solution is critical to ensuring optimal efficiency and completing client projects on time and on budget. The company previously used a number of scheduling systems for its various building projects. These packages provided different levels of schedule detail and required schedulers, engineers, and other employees to learn multiple systems. From an IT cost and complexity perspective, the company had to manage multiple scheduling systems and pay for multiple sets of licenses.

    The company looked to standardize on an enterprise project management system and selected Oracle's Primavera P6 Enterprise Project Portfolio Management. Webcor uses the solution's advanced capabilities to schedule complex projects, analyze delays, model and propose multiple scenarios to demonstrate and mitigate delays and cost overruns, and process that information efficiently to deliver the scheduling precision that public and private projects require. In fact, the solution was instrumental in helping the company's expansion into public sector projects during the recent economic downturn: with Primavera P6 in place, the company could deliver the precise scheduling and milestone reporting capabilities required for large public projects.

    The solution is now in use managing the high-profile University of California, Berkeley Memorial Stadium project. Webcor was hired as construction manager and general contractor for the stadium renovation, a fast-paced project located near the seismically active Hayward Fault Zone. Because of the University of California's football schedule, meeting the deadline for the coming season placed Webcor in a situation where risk awareness and early warnings of issues would be paramount, and Primavera would deliver those answers. The team also needed to enable multiple stakeholders to use an internet-based platform to access the schedule from various locations, and to model complicated sequencing requirements where swift decisions would be made to keep the project on track.

    The schedule is an integral part of Webcor's construction management process for the stadium project. Rather than providing the client with the industry-standard monthly update, Webcor updates the critical path method (CPM) schedule on a weekly basis. The project team also reviews and updates the schedule weekly to confirm that progress and forecasted performance are accurate.

    Hired by the university for its ability to deliver in high-risk environments, the Webcor team was recently hit with a design supplement that could have added up to 70 days to the project. Using Oracle Primavera P6, the team sprang into action, analyzing multiple "what if" scenarios to review mitigation means and methods. Determined to make sure the Bears could take the field in the coming season, the project team nearly eliminated the impact through its creative analysis in working the schedule. The total time from the issuance of the final design supplement to an agreed mitigation response was less than one week; leveraging the Oracle Primavera solution, Webcor was able to deliver superior customer value.

    With the ability to efficiently manage projects and schedules, Webcor can ensure it completes its projects on time and on budget, as well as inform clients about what changes to plans will mean in terms of delays and additional costs.

    Read the complete customer case study at: http://www.oracle.com/us/corporate/customers/customersearch/webcor-builders-1-primavera-ss-1639886.html

    Read the article

  • SQL Server 2008: Table Valued Parameters

    In SQL Server 2005 and earlier, it was not possible to pass a table variable as a parameter to a stored procedure. When multiple rows of data needed to be sent to SQL Server, developers either had to send one row at a time or come up with other workarounds to meet their requirements. While a VB.Net developer recently informed me that there is a SqlBulkCopy object available in .NET to send multiple rows of data to SQL Server at once, that data still cannot be passed to a stored procedure as a parameter. Possibly the most anticipated T-SQL feature of SQL Server 2008 is the new table-valued parameter: the ability to easily pass a table to a stored procedure from T-SQL code or from an application as a parameter.
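    To make this concrete, below is a minimal T-SQL sketch of a table-valued parameter. The type, procedure, and table names (OrderItemList, InsertOrderItems, OrderItems) are hypothetical illustrations, not from the original post, and the sketch assumes a dbo.OrderItems target table already exists on a SQL Server 2008 or later instance.

    -- Define a reusable table type; this is the shape of the parameter.
    -- (All object names here are hypothetical examples.)
    CREATE TYPE dbo.OrderItemList AS TABLE
    (
        ProductID INT NOT NULL,
        Quantity  INT NOT NULL
    );
    GO

    -- A procedure that accepts the table type; TVPs must be declared READONLY.
    CREATE PROCEDURE dbo.InsertOrderItems
        @Items dbo.OrderItemList READONLY
    AS
    BEGIN
        -- Set-based insert: the whole batch of rows arrives in a single call.
        INSERT INTO dbo.OrderItems (ProductID, Quantity)
        SELECT ProductID, Quantity
        FROM @Items;
    END
    GO

    -- Calling from T-SQL: declare a variable of the table type, fill it, pass it.
    DECLARE @MyItems dbo.OrderItemList;
    INSERT INTO @MyItems VALUES (101, 2), (205, 1), (317, 5);
    EXEC dbo.InsertOrderItems @Items = @MyItems;

    From an application, the same procedure can be called by setting a SqlParameter's SqlDbType to Structured and assigning a DataTable (or a DbDataReader) as its value, which eliminates the row-at-a-time round trips described above.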

    Read the article

  • Is it possible to export Windows event logs from multiple servers to a non-Windows host, without running an event manager on each of the Windows servers?

    - by Taylor Matyasz
    I want to export event logs from Windows to a non-Windows host. I was considering using Logstash, but that would seem to require that I install and run Logstash on each server. Is it possible to do this without having to run it on all of the servers? I am hoping to consolidate all of the information from the different servers to make searching and reporting much easier. If not, what would you recommend as the best way to export to a non-Windows host in real time? Thank you.

    Read the article
