Search Results

Search found 13928 results on 558 pages for 'large scale nat'.


  • wordpress image showing up multiple times

    - by JCHASE11
    I am writing a wordpress theme, and have run into a fairly basic problem. By default, when you insert an image into a post, it displays that image at the size you specify, on both the homepage and the single post. I would like to have different sized images, displaying a thumbnail on the homepage, and a full sized image when you click through to the post. I am using wordpress 2.9's new thumbnail feature, which has created great thumbnails for the homepage. But now, I am stuck with a nice thumbnail next to a large photo (on the index/home page). On the single page, it is displaying correctly with just the large picture. Basically, I need to know how to tell wordpress to only display the large post image on the single page, not on the index. Also: I have used the timthumb script, but I think the answer is far more basic than needing plugins or scripts. Thanks!
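
    For context, this split is usually handled in the theme templates themselves; a minimal sketch of an index.php loop that shows only the WP 2.9 post thumbnail (the template structure and sizes here are assumptions, not the poster's theme):

        <?php
        // index.php loop - a sketch; single.php keeps the_content(), so the
        // full-size inserted image only renders on the single-post page
        if ( have_posts() ) : while ( have_posts() ) : the_post(); ?>
            <h2><a href="<?php the_permalink(); ?>"><?php the_title(); ?></a></h2>
            <?php if ( has_post_thumbnail() ) the_post_thumbnail( 'thumbnail' ); ?>
            <?php the_excerpt(); // the_content() would print the full-size image ?>
        <?php endwhile; endif; ?>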

    Read the article

  • Which source control to use (for a lot of image files)?

    - by invisal
    I finally decided to give source control a try with my existing project (since I will be hiring a new developer soon). I am pretty new to this area and I need a recommendation on which source control system to use for my current project. I am developing a web application that deals with a large number of pictures. Currently, we have over 500,000 pictures (a large-size picture plus several thumbnails for each). I am using PHP, which is not what I am concerned about (it is only a few hundred script files). My major concern is the large number of pictures. NOTE: I just installed VisualSVN. What do you think about SVN? Do you think it will fit my requirements?

    Read the article

  • Speed-up of readonly MyISAM table

    - by Ozzy
    We have a large MyISAM table that is used to archive old data. This archiving is performed every month, and except on these occasions data is never written to the table. Is there any way to "tell" MySQL that this table is read-only, so that MySQL might optimize the performance of reads from this table? I've looked at the MEMORY storage engine, but the problem is that this table is so large that it would take a large portion of the server's memory, which I don't want. Hope my question is clear enough; I'm a novice when it comes to db administration, so any input or suggestions are welcome.
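
    For context, one approach often used for this (an illustration, not from the question) is to compress the archive table with myisampack, which turns it into a packed, read-only MyISAM table that is smaller and faster to scan; indexes are then rebuilt with myisamchk. The database/table names and data directory below are placeholders:

        # placeholders: archive_db.old_data under the default MySQL data directory
        mysql -e "FLUSH TABLES archive_db.old_data"        # close open file handles first
        myisampack /var/lib/mysql/archive_db/old_data.MYI  # pack rows; table becomes read-only
        myisamchk -rq --sort-index /var/lib/mysql/archive_db/old_data.MYI  # rebuild indexes for the packed format
        mysql -e "FLUSH TABLES archive_db.old_data"        # make mysqld reopen the packed table

    The table would need to be unpacked (or rebuilt) before the next monthly archiving run, so this fits a load-once, read-many cycle.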

    Read the article

  • Looking for alternatives to the database project.

    - by Dave
    I have a fairly large database project which contains nine databases, one of which has a fairly large schema. This project takes a large amount of time to build and I'm about to pull my hair out. We'd like to keep our database source controlled, but we're having a hard time getting the other devs to use the project and build the database project before checking in, just because it takes so long to build. It is seriously crippling our work, so I'm looking for alternatives. Maybe something can be done with Redgate's SQL Compare? I think maybe the only drawback there is that it doesn't validate syntax? Anyone's thoughts/suggestions would be most appreciated.

    Read the article

  • What arguments do I send to a function being called by a button in Python?

    - by Jared
    I have a UI; in that UI are 4 text fields and 1 int field. I have a function that calls another function based on what's inside the text fields; this function has (self, *args). The function being called takes five arguments, and I don't know what to put in it to make it actually work with my UI, because Python buttons send an argument of their own. I have tried self and *args, but it doesn't work. Here is my code (I didn't include most of the UI code since it is self-explanatory):

    def crBC(self, IKJoint, FKJoint, bindJoint, xQuan, switch):
        '''
        You should have a controller with an attribute 'ikFkBlend' - The name can be
        changed after the script executes. Controller should contain an enum -
        FK/DYN(0), IK(1). Specify the IK joint, then either the dynamic or FK joint,
        then the bind joint. Then a quantity of joints to pass through and connect.
        Tested currently on 600 joints (200 x 3), executed in less than a second.
        Returns nothing. Please open your script editor for details.
        '''
        import itertools

        # gets children joints of the selected joint
        chHipIK = cmds.listRelatives(IKJoint, ad = True, type = 'joint')
        chHipFK = cmds.listRelatives(FKJoint, ad = True, type = 'joint')
        chHipBind = cmds.listRelatives(bindJoint, ad = True, type = 'joint')
        # list is built backwards, this reverses the list
        chHipIK.reverse()
        chHipFK.reverse()
        chHipBind.reverse()
        # appends the initial joint to the list
        chHipIK.append(IKJoint)
        chHipFK.append(FKJoint)
        chHipBind.append(bindJoint)
        # puts the last joint at the start of the list because the initial joint
        # was added to the end
        chHipIK.insert(0, chHipIK.pop())
        chHipFK.insert(0, chHipFK.pop())
        chHipBind.insert(0, chHipBind.pop())
        # pops off the remaining joints in the list the user does not wish to be blended
        chHipBind[xQuan:] = []
        chHipIK[xQuan:] = []
        chHipFK[xQuan:] = []
        # goes through the bind joints, makes a blend colors for each one, connects
        # the switch to the blender
        for a, b, c in itertools.izip(chHipBind, chHipIK, chHipFK):
            rotBC = cmds.shadingNode('blendColors', asUtility = True, n = a + 'rotate_BC')
            tranBC = cmds.shadingNode('blendColors', asUtility = True, n = a + 'tran_BC')
            scaleBC = cmds.shadingNode('blendColors', asUtility = True, n = a + 'scale_BC')
            cmds.connectAttr(switch + '.ikFkSwitch', rotBC + '.blender')
            cmds.connectAttr(switch + '.ikFkSwitch', tranBC + '.blender')
            cmds.connectAttr(switch + '.ikFkSwitch', scaleBC + '.blender')
            # goes through the ik joints, connects to the blend colors
            cmds.connectAttr(b + '.rotate', rotBC + '.color1', force = True)
            cmds.connectAttr(b + '.translate', tranBC + '.color1', force = True)
            cmds.connectAttr(b + '.scale', scaleBC + '.color1', force = True)
            # connects FK joints to the blend colors
            cmds.connectAttr(c + '.rotate', rotBC + '.color2')
            cmds.connectAttr(c + '.translate', tranBC + '.color2')
            cmds.connectAttr(c + '.scale', scaleBC + '.color2')
            # connects blend colors to bind joints
            cmds.connectAttr(rotBC + '.output', a + '.rotate')
            cmds.connectAttr(tranBC + '.output', a + '.translate')
            cmds.connectAttr(scaleBC + '.output', a + '.scale')

    -------------------

    def execCrBC(self, *args):
        g.crBC(cmds.textField(self.ikJBC, q = True, tx = True),
               cmds.textField(self.fkJBC, q = True, tx = True),
               cmds.textField(self.bindJBC, q = True, tx = True),
               cmds.intField(self.bQBC, q = True, v = True),
               cmds.textField(self.sCBC, q = True, tx = True))

    -------------------

    self.bQBC = cmds.intField()
    cmds.text(l = '')
    self.sCBC = cmds.textField()
    cmds.text(l = '')
    cmds.button(l = 'Help Docs', c = self.crBC.__doc__)
    cmds.setParent('..')
    cmds.button(l = 'Create', c = self.execCrBC)

    Here is the code causing the problem, as requested:

    import maya.cmds as cmds
    import jtRigUI.createDummyRig as dum
    import jtRigUI.createSkeleton as sk
    import jtRigUI.generalUtilities as gu
    import jtRigUI.createLegRig as lr
    import jtRigUI.createArmRig as ar

    class RUI(dum.Dict, dum.Dummy, sk.Skel, sk.FiSkel, lr.LeanLocs, lr.LegRig, ar.ArmRig, gu.Gutils):
        def __init__(self, charNameUI, gScaleUI, fingButtonGrp, thumbCheckBox, spineButtonGrp, neckButtonGrp, ikJBC, fkJBC, bindJBC, bQBC, sCBC):
            rigUI = 'rigUI'
            if cmds.window(rigUI, exists = True):
                cmds.deleteUI(rigUI)
            rigUI = cmds.window(rigUI, t = 'JT Rigging UI', sizeable = False, tb = True, mnb = False, mxb = False, menuBar = True, tlb = True, nm = 5)
            form = cmds.formLayout()
            tabs = cmds.tabLayout(innerMarginWidth = 1, innerMarginHeight = 1)
            rigUIMenu = cmds.menu('Help', hm = True)
            aboutMenu = cmds.menuItem('about')
            cmds.popupMenu('about', button = 1)
            deleteUIMenu = cmds.menu('Delete', hm = True)
            cmds.menuItem('dummySkeleton')
            cmds.formLayout(form, edit = True, attachForm = ((tabs, 'top', 0), (tabs, 'left', 0), (tabs, 'bottom', 0), (tabs, 'right', 0)), w = 30)

            tab1 = cmds.rowColumnLayout('Dummy')
            #cmds.columnLayout(rowSpacing = 10)
            #cmds.setParent('..')
            cmds.frameLayout(l = 'A: Dummy Skeleton Setup', w = 400)
            self.charNameUI = cmds.textFieldGrp(label = "Optional Character Name:", ann = "Insert a name for the character or leave empty.", tx = '', w = 1)
            fingJUI = cmds.frameLayout(l = 'B: Number of Fingers', w = 10)
            cmds.text('\n', h = 5)
            self.fingButtonGrp = cmds.radioButtonGrp('fingRadio', p = fingJUI, l = 'Fingers: ', sl = 4, w = 1, numberOfRadioButtons = 4, labelArray4 = ['One', 'Two', 'Three', 'Four'], ct2 = ('left', 'left'), cw5 = [60, 60, 60, 60, 60])
            self.thumbCheckBox = cmds.checkBoxGrp(l = 'Thumb: ', v1 = True)
            cmds.text('\n', h = 5)
            spineJUI = cmds.frameLayout(l = 'C: Number of Spine Joints')
            cmds.text('\n', h = 5)
            self.spineButtonGrp = cmds.radioButtonGrp('spineRadio', p = spineJUI, l = 'Spine Joints: ', sl = 2, w = 1, numberOfRadioButtons = 3, labelArray3 = ['Three', 'Five', 'Ten'], ct2 = ('left', 'left'), cw4 = [95, 95, 95, 95])
            cmds.text('\n', h = 5)
            neckJUI = cmds.frameLayout(l = 'D: Number of Neck Joints')
            cmds.text('\n', h = 5)
            self.neckButtonGrp = cmds.radioButtonGrp('neckRadio', p = neckJUI, l = 'Neck Joints: ', sl = 0, w = 1, numberOfRadioButtons = 3, labelArray3 = ['Two', 'Three', 'Four'], ct2 = ('left', 'left'), cw4 = [95, 95, 95, 95])
            cmds.text('\n', h = 5)
            cmds.setParent('..')
            cmds.setParent('..')
            cmds.setParent('..')
            cmds.frameLayout('E: Creation')
            cmds.text('SAVE FIRST: CAN NOT UNDO', bgc = (0.2, 0.2, 0.2))
            cmds.button(l = '\nCreate Dummy Skeleton\n', c = self.build)  # also have it make char name field grey
            cmds.text('Elbows and Knees must have bend.', bgc = (0.2, 0.2, 0.2))
            cmds.columnLayout()
            cmds.setParent('..')
            cmds.setParent('..')
            cmds.setParent('..')
            cmds.setParent('..')

            tab2 = cmds.rowColumnLayout('Skeleton')
            cmds.columnLayout(columnAttach = ('both', 5), rowSpacing = 10, columnWidth = 150)
            cmds.setParent('..')
            cmds.frameLayout(l = 'A: Skeleton Setup')
            cmds.text('SAVE FIRST: CAN NOT UNDO', bgc = (0.2, 0.2, 0.2))
            cmds.button(l = '\nConvert to Skeleton - Orient - Set LRA\n', c = self.buildSkel)
            self.gScaleUI = cmds.textFieldGrp(label = "Scale Multiplier:", ann = "Scale multiplier of Character: basis for all further base controllers", tx = '1.0', w = 1, ed = False, en = False, visible = True)
            cmds.frameLayout('B: Manual Orientation')
            cmds.text('You must manually check finger, thumb, leg, foot orientation specifically.\nConfirm rest of joints.\nSpine: X aim, Y point backwards from spine, Z to the side.\nFingers: X is aim, Y points upwards, Z to the side - Spread on Y, curl on Z.\nFoot: Pivots on Y, rolls on Z, leans on X.')
            cmds.columnLayout()
            cmds.setParent('..')
            cmds.frameLayout('C: Finalize Creation of Skeleton')
            cmds.button(l = '\nFinalize Skeleton\n', c = self.finishS)
            cmds.setParent('..')
            cmds.setParent('..')
            cmds.setParent('..')
            cmds.setParent('..')

            tab3 = cmds.rowColumnLayout('Legs')
            cmds.columnLayout(columnAttach = ('both', 5), rowSpacing = 10, columnWidth = 150)
            cmds.setParent('..')
            cmds.frameLayout(l = 'A: Leg Rig Setup')
            cmds.button(l = '\nGenerate Foot Lean Locators\n', c = self.makeLean)
            cmds.text('Place on either side of the foot.\nDo not rotate: Automatic orientation in place.')
            cmds.frameLayout(l = 'B: Rig Legs')
            cmds.button(l = '\nRig Legs\n', c = self.makeLegs)
            cmds.setParent('..')
            cmds.setParent('..')
            cmds.setParent('..')

            tab4 = cmds.rowColumnLayout('Arms')
            cmds.columnLayout(columnAttach = ('both', 5), rowSpacing = 10, columnWidth = 150)
            cmds.setParent('..')
            cmds.frameLayout(l = 'A: Arm Rig Setup')
            cmds.button(l = '\nA: Rig Arms\n', c = self.makeArms)
            cmds.setParent('..')
            cmds.setParent('..')

            tab5 = cmds.rowColumnLayout('Spine and Head')
            cmds.columnLayout(columnAttach = ('both', 5), rowSpacing = 10, columnWidth = 150)
            cmds.setParent('..')
            cmds.frameLayout(l = 'Spine Rig Setup')
            cmds.setParent('..')
            cmds.setParent('..')

            tab6 = cmds.rowColumnLayout('Stretchy IK')
            cmds.columnLayout(columnAttach = ('both', 5), rowSpacing = 10, columnWidth = 150)
            cmds.setParent('..')
            cmds.frameLayout(l = 'Stretchy Setup')
            cmds.setParent('..')
            cmds.setParent('..')

            tab6 = cmds.rowColumnLayout('Extras')
            cmds.scrollLayout(saw = 600, sah = 600, cr = True)
            cmds.columnLayout(columnAttach = ('both', 5), rowSpacing = 10, columnWidth = 150)
            cmds.setParent('..')
            cmds.frameLayout(l = 'General Utilities')
            cmds.text('\nHere are all my general utilities for various things')
            cmds.frameLayout(l = 'Automatic Blend Colors Creation and Connection')
            cmds.rowColumnLayout(nc = 5, w = 10)
            cmds.text('IK Joint:')
            cmds.text(l = '')
            cmds.text('FK/Dyn Joint:')
            cmds.text(l = '')
            cmds.text('Bind Joint:')
            self.ikJBC = cmds.textField()
            cmds.text(l = '')
            self.fkJBC = cmds.textField()
            cmds.text(l = '')
            self.bindJBC = cmds.textField()
            cmds.text(' \nBlend Quantity:')
            cmds.text(l = '')
            cmds.text(' \nSwitch Control:')
            cmds.text(l = '')
            cmds.text(l = '')
            self.bQBC = cmds.intField()
            cmds.text(l = '')
            self.sCBC = cmds.textField()
            cmds.text(l = '')
            cmds.button(l = 'Help Docs', c = self.crBC.__doc__)
            cmds.setParent('..')
            cmds.button(l = 'Create', c = self.execCrBC)
            cmds.text(l = '')
            cmds.setParent('..')
            cmds.frameLayout(l = 'Make Spline IK Curve Stretch And Squash')
            cmds.rowColumnLayout(nc = 5, w = 10)
            cmds.text('Curve Name:')
            cmds.text(l = '')
            cmds.text('Setup Name:')
            cmds.text(l = '')
            cmds.text('Joint Quantity:')
            self.ikJBC = cmds.textField()
            cmds.text(l = '')
            self.fkJBC = cmds.textField()
            cmds.text(l = '')
            self.bindJBC = cmds.textField()
            cmds.text(' \nSwitch Control:')
            cmds.text(l = '')
            cmds.text(' \nGlobal Control:')
            cmds.text(l = '')
            cmds.text(l = '')
            self.bQBC = cmds.intField()
            cmds.text(l = '')
            self.sCBC = cmds.textField()
            cmds.text(l = '')
            cmds.button(l = 'Help Docs', c = self.crBC.__doc__)
            cmds.setParent('..')
            cmds.button(l = 'Create', c = self.execCrBC)
            cmds.setParent('..')
            cmds.showWindow(rigUI)

    r = RUI('charNameUI', 'gScaleUI', 'fingButtonGrp', 'thumbCheckBox', 'spineButtonGrp', 'neckButtonGrp', 'ikJBC', 'fkJBC', 'bindJBC', 'bQBC', 'sCBC')
    # last modified at 6.20 pm 29th June 2011
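
    For context, Maya buttons always invoke their command with one extra boolean argument of their own, so the usual idiom is to wrap the real call in a lambda (or functools.partial) that absorbs that argument. A minimal sketch using the field names from the question:

        # a sketch: the lambda swallows the button's own argument and queries
        # the UI fields at click time rather than at UI-build time
        cmds.button(l = 'Create', c = lambda *args: self.crBC(
            cmds.textField(self.ikJBC, q = True, tx = True),
            cmds.textField(self.fkJBC, q = True, tx = True),
            cmds.textField(self.bindJBC, q = True, tx = True),
            cmds.intField(self.bQBC, q = True, v = True),
            cmds.textField(self.sCBC, q = True, tx = True)))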

    Read the article

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation was complete, I would play the animation and delete the Azure deployment. The standing joke with the audience was that it was a “$2 demo”, as the compute charges for running the 16 instances for an hour were $1.92; factor in the bandwidth charges and it’s a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, using Windows Azure can work out very cost effective. The “$2 demo” was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one-hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a “$2 demo” to a “$30 demo”. The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour. This article will take a run through how I achieved this.

    Ray Tracing

    Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90’s with companies like Pixar creating feature length computer animations, and also the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray traced image, the ray of light that would pass from the view point must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate if the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image.

    Pin-Board Toys

    Having very little artistic talent and a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I’ve always liked the pin-board desktop toys that became popular in the 80’s, and when I was working as a 3D animator back in the 90’s I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months.

    PolyRay

    Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours processing on a 486, produces a high quality ray-traced image. The following is an example of a basic PolyRay scene file.
    background Midnight_Blue

    static define matte surface { ambient 0.1 diffuse 0.7 }
    define matte_white texture { matte { color white } }
    define matte_black texture { matte { color dark_slate_gray } }
    define position_cylindrical 3
    define lookup_sawtooth 1
    define light_wood <0.6, 0.24, 0.1>
    define median_wood <0.3, 0.12, 0.03>
    define dark_wood <0.05, 0.01, 0.005>

    define wooden texture {
        noise surface {
            ambient 0.2
            diffuse 0.7
            specular white, 0.5
            microfacet Reitz 10
            position_fn position_cylindrical
            position_scale 1
            lookup_fn lookup_sawtooth
            octaves 1
            turbulence 1
            color_map(
                [0.0, 0.2, light_wood, light_wood]
                [0.2, 0.3, light_wood, median_wood]
                [0.3, 0.4, median_wood, light_wood]
                [0.4, 0.7, light_wood, light_wood]
                [0.7, 0.8, light_wood, median_wood]
                [0.8, 0.9, median_wood, light_wood]
                [0.9, 1.0, light_wood, dark_wood])
        }
    }
    define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 } }
    define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7 }
    define steely_blue texture { shiny { color black } }
    define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }

    viewpoint {
        from <4.000, -1.000, 1.000>
        at <0.000, 0.000, 0.000>
        up <0, 1, 0>
        angle 60
        resolution 640, 480
        aspect 1.6
        image_format 0
    }

    light <-10, 30, 20>
    light <-10, 30, -20>

    object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }
    object { sphere <0.000, 0.000, 0.000>, 1.00 chrome }
    object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }

    After setting up the background and defining colors and textures, the viewpoint is specified. The “camera” is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct; this will be fixed later.

    Modeling the Pin Board

    The frame of the pin-board is made up of three boxes and six cylinders; the front box is modeled using a clear, slightly reflective solid with the same refractive index as glass. The other shapes are modeled as metal.

    object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass }
    object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue }
    object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue }
    object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue }
    object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue }
    object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue }
    object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue }
    object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue }
    object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }

    In order to create the matrix of pins that make up the pin board, I used a basic console application with a few nested loops to create two intersecting matrixes of pins, which models the layout used in the pin boards (a sketch of such a generator follows below). The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code.
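
    The console application itself is not included in the article; the following is a minimal C# sketch of the nested-loop idea it describes. The pin counts, spacing, offsets, radii and output file name are all assumptions, not values from the post:

        using System.IO;
        using System.Text;

        class PinBoardGenerator
        {
            static void Main()
            {
                var scene = new StringBuilder();
                double spacing = 0.1;   // assumed gap between pin centres
                // two intersecting matrixes of pins, the second offset by half the spacing
                for (int layer = 0; layer < 2; layer++)
                {
                    double offset = layer * spacing / 2;
                    for (int x = 0; x < 76; x++)
                    {
                        for (int y = 0; y < 76; y++)
                        {
                            double px = x * spacing + offset - 3.8;
                            double py = y * spacing + offset - 1.0;
                            double pz = 0.0;  // later set per pin from a depth image pixel
                            scene.AppendLine(string.Format(
                                "object {{ sphere <{0:F3}, {1:F3}, {2:F3}>, 0.05 chrome }}",
                                px, py, pz));
                            scene.AppendLine(string.Format(
                                "object {{ cylinder <{0:F3}, {1:F3}, {2:F3}>, <{0:F3}, {1:F3}, {3:F3}>, 0.02 chrome }}",
                                px, py, pz, pz - 0.5));
                        }
                    }
                }
                File.WriteAllText("pinboard.pi", scene.ToString());
            }
        }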
    For the complete animation, 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used. The position of the pin can be set based on the color of the pixel at the appropriate position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated. The challenge now was to make a cool animation. The Azure logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not be the same as the depth of the objects from the camera. In order to simulate the pin board accurately, a series of frames from a depth camera could be used.

    Windows Kinect

    The Kinect controllers for the X-Box 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer's perspective. Both the X-Box 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the X-Box Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode to allow depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions.

    Creating a Depth Field Animation

    The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below. The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation. The Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below. Once a series of 2,000 depth images has been captured, the task of creating the animation can begin.

    Rendering a Test Frame

    In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board.
    The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below. The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour.

    Windows Azure Worker Roles

    The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.

    Number of Servers    Cost
    1                    $500
    16                   $8,000
    256                  $128,000

    As well as the cost of the servers, there would be additional costs for networking, racks etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling. The Windows Azure compute services provide worker roles, which are ideal for performing processor-intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below.

    Number of Worker Roles    Render Time    Cost
    1                         256 hours      $30.72
    16                        16 hours       $30.72
    256                       1 hour         $30.72

    Using worker roles in Windows Azure provides the same cost for the 256 hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles.

    Creating a Render Farm in Windows Azure

    The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components:

    On-Premise
    - Windows Kinect – used combined with the Kinect Explorer to create a stream of depth images.
    - Animation Creator – this application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages added to the jobs queue.
    - Process Monitor – this application queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
    - Image Downloader – this application polls the image queue and downloads the rendered animation files once they are complete.

    Windows Azure
    - Azure Storage – queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.

    The architecture of each worker role is shown below. The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image.
    The service definition for the worker role, with the local storage configuration highlighted, is shown below.

    <?xml version="1.0" encoding="utf-8"?>
    <ServiceDefinition name="CloudRay">
      <WorkerRole name="CloudRayWorkerRole" vmsize="Small">
        <Imports>
        </Imports>
        <ConfigurationSettings>
          <Setting name="DataConnectionString" />
        </ConfigurationSettings>
        <LocalResources>
          <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />
        </LocalResources>
      </WorkerRole>
    </ServiceDefinition>

    The two executable programs, PolyRay.exe and DTA.exe, are included in the Azure project, with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90’s, it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90’s. Each worker role will use the following process to render the animation frames.

    1. The worker process polls the job queue; if a job is available, the scene description file is downloaded from blob storage to local storage.
    2. PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file.
    3. DTA.exe is started in a process with the appropriate command line arguments to convert the TGA file to a JPG file.
    4. The JPG file is uploaded from local storage to the images blob container.
    5. A message is placed on the images queue to indicate a new image is available for download.
    6. The job message is deleted from the job queue.
    7. The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used.

    The code for this is shown below.

    public override void Run()
    {
        // Set environment variables
        string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);
        string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);

        LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");
        string localStorageRootPath = rayStorage.RootPath;

        JobQueue jobQueue = new JobQueue("renderjobs");
        JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");
        CloudRayBlob sceneBlob = new CloudRayBlob("scenes");
        CloudRayBlob imageBlob = new CloudRayBlob("images");
        RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();

        Frames = 0;

        while (true)
        {
            // Get the render job from the queue
            CloudQueueMessage jobMsg = jobQueue.Get();

            if (jobMsg != null)
            {
                // Get the file details
                string sceneFile = jobMsg.AsString;
                string tgaFile = sceneFile.Replace(".pi", ".tga");
                string jpgFile = sceneFile.Replace(".pi", ".jpg");

                string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);
                string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);
                string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);

                // Copy the scene file to local storage
                sceneBlob.DownloadFile(sceneFilePath);

                // Run the ray tracer.
                string polyrayArguments =
                    string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);
                Process polyRayProcess = new Process();
                polyRayProcess.StartInfo.FileName =
                    Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);
                polyRayProcess.StartInfo.Arguments = polyrayArguments;
                polyRayProcess.Start();
                polyRayProcess.WaitForExit();

                // Convert the image
                string dtaArguments =
                    string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName(jpgFilePath));
                Process dtaProcess = new Process();
                dtaProcess.StartInfo.FileName =
                    Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);
                dtaProcess.StartInfo.Arguments = dtaArguments;
                dtaProcess.Start();
                dtaProcess.WaitForExit();

                // Upload the image to blob storage
                imageBlob.UploadFile(jpgFilePath);

                // Add a download job.
                downloadQueue.Add(jpgFile);

                // Delete the render job message
                jobQueue.Delete(jobMsg);

                Frames++;
            }
            else
            {
                Thread.Sleep(1000);
            }

            // Log the worker role activity.
            roleLifecycleDataSource.Alive
                ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);
        }
    }

    Monitoring Worker Role Instance Lifecycle

    In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation, data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.

    public class RoleLifecycle : TableServiceEntity
    {
        public string ServerName { get; set; }
        public string Status { get; set; }
        public DateTime StartTime { get; set; }
        public DateTime EndTime { get; set; }
        public long SecondsRunning { get; set; }
        public DateTime LastActiveTime { get; set; }
        public int Frames { get; set; }
        public string Comment { get; set; }

        public RoleLifecycle()
        {
        }

        public RoleLifecycle(string roleName)
        {
            PartitionKey = roleName;
            RowKey = Utils.GetAscendingRowKey();
            Status = "Started";
            StartTime = DateTime.UtcNow;
            LastActiveTime = StartTime;
            EndTime = StartTime;
            SecondsRunning = 0;
            Frames = 0;
        }
    }

    A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame to record the total number of frames rendered and the total processing time. These statistics are used by the monitoring application to determine the effectiveness of use of resources in the render farm.

    Rendering the Animation

    The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows for the application to be tested in the cloud environment, and the performance of the application determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below.
<?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="16" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     About six minutes after deploying the application the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames.   Five minutes after the first worker role became active the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation.   With 16 worker roles u and running it can be seen that one hour and 45 minutes CPU time has been used to render 32 frames with a render time of just under 10 minutes.     At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour more processing power will be required. Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.   <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="256" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     Six minutes after the new configuration has been applied 75 new worker roles have activated and are processing their first frames.   Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds.   We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes.   The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute. 
    The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in spreading the load across the 256 worker role instances. The 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.

    Completed Animation

    I’ve uploaded the completed animation to YouTube; a low resolution preview is shown below. Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles. The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc

    Effective Use of Resources

    According to the CloudRay monitor statistics, the animation took 6 days, 7 hours and 22 minutes of CPU time to render; this works out at 152 hours of compute time, rounded up to the nearest hour. As the worker role instances are billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles, and utilize the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application. The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively.

    Grid Computing Scenarios

    Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective.

    - Windows Azure can provide massive compute power, on demand, in a matter of minutes.
    - The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution.
    - Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget.
    - No charges for inbound data transfer make the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.)

    Tips for using Windows Azure for Grid Computing Scenarios

    I found a render farm a fairly simple scenario to implement using Windows Azure. I was impressed by the ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes; in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure.

    - Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances.
    - Windows Azure accounts are typically limited to 20 cores. If you need to use more than this, a call to support and a credit card check will be required.
    - Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started.
    - Monitor the utilization of the resources you are provisioning; ensure that you are not paying for worker roles that are idle.
    - If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective.
    - Third party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a startup task and a possible re-boot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles.
    - Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal, it could get very expensive!

    Read the article

  • Asterisk SIP digest authentication username mismatch

    - by Matt
    I have an Asterisk system that I'm attempting to get to work as a backup for our 3com system. We already use it for a conference bridge. Our phones are the 3com 3C10402B, so I don't have the issue of older 3com phones that come without a SIP image. The 3com phones are communicating SIP with the Asterisk, but are unable to register because they present a digest username value that doesn't match what Asterisk thinks it should be. As an example, here are the relevant lines from a successful registration from a soft phone:

    Server sends:
    WWW-Authenticate: Digest algorithm=MD5, realm="asterisk", nonce="1cac3853"
    Phone responds:
    Authorization: Digest username="2321", realm="asterisk", nonce="1cac3853", uri="sip:192.168.254.12", algorithm=md5, response="d32df9ec719817282460e7c2625b6120"

    For the 3com phone, those same lines look like this (and registration fails):

    Server sends:
    WWW-Authenticate: Digest algorithm=MD5, realm="asterisk", nonce="6c915c33"
    Phone responds:
    Authorization: Digest username="sip:[email protected]", realm="asterisk", nonce="6c915c33", uri="sip:192.168.254.12", opaque="", algorithm=MD5, response="a89df25f19e4b4598595f919dac9db81"

    Basically, Asterisk wants to see a username of 2321 in the Digest username field, but the 3com phone is sending sip:[email protected]. Does anyone know how to tell Asterisk to accept this format of username in the digest authentication? Here is the sip.conf info for that extension:

    [2321]
    deny=0.0.0.0/0.0.0.0
    disallow=all
    type=friend
    secret=1234
    qualify=yes
    port=5060
    permit=0.0.0.0/0.0.0.0
    nat=yes
    mailbox=2321@device
    host=dynamic
    dtmfmode=rfc2833
    dial=SIP/2321
    context=from-internal
    canreinvite=no
    callerid=device <2321
    allow=ulaw, alaw
    call-limit=50

    ... and for those interested in the grit, here is the debug output of the registration attempt:

    REGISTER sip:192.168.254.12 SIP/2.0
    v: SIP/2.0/UDP 192.168.254.157:5060
    t:
    f:
    i: fa4451d8-01d6-1cc2-13e4-00e0bb33beb9
    CSeq: 18580 REGISTER
    Max-Forwards: 70
    m: ;dt=544
    Expires: 3600
    User-Agent: 3Com-SIP-Phone/V8.0.1.3
    X-3Com-PhoneInfo: firstRegistration=no; primaryCallP=192.168.254.12; secondaryCallP=0.0.0.0;
    --- (11 headers 0 lines) ---
    Using latest REGISTER request as basis request
    Sending to 192.168.254.157 : 5060 (no NAT)

    SIP/2.0 100 Trying
    Via: SIP/2.0/UDP 192.168.254.157:5060;received=192.168.254.157
    From:
    To:
    Call-ID: fa4451d8-01d6-1cc2-13e4-00e0bb33beb9
    CSeq: 18580 REGISTER
    User-Agent: Asterisk PBX
    Allow: INVITE, ACK, CANCEL, OPTIONS, BYE, REFER, SUBSCRIBE, NOTIFY
    Supported: replaces
    Contact:
    Content-Length: 0

    SIP/2.0 401 Unauthorized
    Via: SIP/2.0/UDP 192.168.254.157:5060;received=192.168.254.157
    From:
    To: ;tag=as3fb867e2
    Call-ID: fa4451d8-01d6-1cc2-13e4-00e0bb33beb9
    CSeq: 18580 REGISTER
    User-Agent: Asterisk PBX
    Allow: INVITE, ACK, CANCEL, OPTIONS, BYE, REFER, SUBSCRIBE, NOTIFY
    Supported: replaces
    WWW-Authenticate: Digest algorithm=MD5, realm="asterisk", nonce="6c915c33"
    Content-Length: 0

    Scheduling destruction of SIP dialog 'fa4451d8-01d6-1cc2-13e4-00e0bb33beb9' in 32000 ms (Method: REGISTER)
    confbridge*CLI

    REGISTER sip:192.168.254.12 SIP/2.0
    v: SIP/2.0/UDP 192.168.254.157:5060
    t:
    f:
    i: fa4451d8-01d6-1cc2-13e4-00e0bb33beb9
    CSeq: 18581 REGISTER
    Max-Forwards: 70
    m: ;dt=544
    Expires: 3600
    User-Agent: 3Com-SIP-Phone/V8.0.1.3
    Authorization: Digest username="sip:[email protected]", realm="asterisk", nonce="6c915c33", uri="sip:192.168.254.12", opaque="", algorithm=MD5, response="a89df25f19e4b4598595f919dac9db81"
    X-3Com-PhoneInfo: firstRegistration=no; primaryCallP=192.168.254.12; secondaryCallP=0.0.0.0;
    --- (12 headers 0 lines) ---
    Using latest REGISTER request as basis request
    Sending to 192.168.254.157 : 5060 (NAT)

    SIP/2.0 100 Trying
    Via: SIP/2.0/UDP 192.168.254.157:5060;received=192.168.254.157
    From:
    To:
    Call-ID: fa4451d8-01d6-1cc2-13e4-00e0bb33beb9
    CSeq: 18581 REGISTER
    User-Agent: Asterisk PBX
    Allow: INVITE, ACK, CANCEL, OPTIONS, BYE, REFER, SUBSCRIBE, NOTIFY
    Supported: replaces
    Contact:
    Content-Length: 0

    SIP/2.0 403 Authentication user name does not match account name
    Via: SIP/2.0/UDP 192.168.254.157:5060;received=192.168.254.157
    From:
    To: ;tag=as3fb867e2
    Call-ID: fa4451d8-01d6-1cc2-13e4-00e0bb33beb9
    CSeq: 18581 REGISTER
    User-Agent: Asterisk PBX
    Allow: INVITE, ACK, CANCEL, OPTIONS, BYE, REFER, SUBSCRIBE, NOTIFY
    Supported: replaces
    Content-Length: 0

    Scheduling destruction of SIP dialog 'fa4451d8-01d6-1cc2-13e4-00e0bb33beb9' in 32000 ms (Method: REGISTER)

    Thanks for your input!

    Read the article

  • Ubuntu 9.10 and Squid 2.7 Transparent Proxy TCP_DENIED

    - by user38400
    Hi, We've spent the last two days trying to get squid 2.7 to work with ubuntu 9.10. The computer running ubuntu has two network interfaces: eth0 and eth1, with dhcp running on eth1. Both interfaces have static IPs; eth0 is connected to the Internet and eth1 is connected to our LAN. We have followed literally dozens of different tutorials with no success. The tutorial here was the last one we did that actually got us some sort of results: http://www.basicconfig.com/linuxnetwork/setup_ubuntu_squid_proxy_server_beginner_guide. When we try to access a site like seriouswheels.com from the LAN we get the following message on the client machine:

    ERROR: The requested URL could not be retrieved

    Invalid Request error was encountered while trying to process the request:

    GET / HTTP/1.1
    Host: www.seriouswheels.com
    Connection: keep-alive
    User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/532.9 (KHTML, like Gecko) Chrome/5.0.307.11 Safari/532.9
    Cache-Control: max-age=0
    Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
    Accept-Encoding: gzip,deflate,sdch
    Cookie: __utmz=88947353.1269218405.1.1.utmccn=(direct)|utmcsr=(direct)|utmcmd=(none); __qca=P0-1052556952-1269218405250; __utma=88947353.1027590811.1269218405.1269218405.1269218405.1; __qseg=Q_D
    Accept-Language: en-US,en;q=0.8
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

    Some possible problems are:
    - Missing or unknown request method.
    - Missing URL.
    - Missing HTTP Identifier (HTTP/1.0).
    - Request is too large.
    - Content-Length missing for POST or PUT requests.
    - Illegal character in hostname; underscores are not allowed.

    Your cache administrator is webmaster.

    Below are all the configuration files: /etc/squid/squid.conf, /etc/network/if-up.d/00-firewall, /etc/network/interfaces, /var/log/squid/access.log. Something somewhere is wrong but we cannot figure out where. Our end goal for all of this is to superimpose content onto every page that a client requests on the LAN. We've been told that squid is the way to do this, but at this point in the game we are just trying to get squid set up correctly as our proxy. Thanks in advance.

    squid.conf

    acl all src all
    acl manager proto cache_object
    acl localhost src 127.0.0.1/32
    acl to_localhost dst 127.0.0.0/8
    acl localnet src 192.168.0.0/24
    acl SSL_ports port 443         # https
    acl SSL_ports port 563         # snews
    acl SSL_ports port 873         # rsync
    acl Safe_ports port 80         # http
    acl Safe_ports port 21         # ftp
    acl Safe_ports port 443        # https
    acl Safe_ports port 70         # gopher
    acl Safe_ports port 210        # wais
    acl Safe_ports port 1025-65535 # unregistered ports
    acl Safe_ports port 280        # http-mgmt
    acl Safe_ports port 488        # gss-http
    acl Safe_ports port 591        # filemaker
    acl Safe_ports port 777        # multiling http
    acl Safe_ports port 631        # cups
    acl Safe_ports port 873        # rsync
    acl Safe_ports port 901        # SWAT
    acl purge method PURGE
    acl CONNECT method CONNECT
    http_access allow manager localhost
    http_access deny manager
    http_access allow purge localhost
    http_access deny purge
    http_access deny !Safe_ports
    http_access deny CONNECT !SSL_ports
    http_access allow localhost
    http_access allow localnet
    http_access deny all
    icp_access allow localnet
    icp_access deny all
    http_port 3128
    hierarchy_stoplist cgi-bin ?
    cache_dir ufs /var/spool/squid/cache1 1000 16 256
    access_log /var/log/squid/access.log squid
    refresh_pattern ^ftp: 1440 20% 10080
    refresh_pattern ^gopher: 1440 0% 1440
    refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
    refresh_pattern (Release|Package(.gz)*)$ 0 20% 2880
    refresh_pattern . 0 20% 4320
    acl shoutcast rep_header X-HTTP09-First-Line ^ICY.[0-9]
    upgrade_http0.9 deny shoutcast
    acl apache rep_header Server ^Apache
    broken_vary_encoding allow apache
    extension_methods REPORT MERGE MKACTIVITY CHECKOUT
    cache_mgr webmaster
    cache_effective_user proxy
    cache_effective_group proxy
    hosts_file /etc/hosts
    coredump_dir /var/spool/squid

    access.log

    1269243042.740      0 192.168.1.11 TCP_DENIED/400 2576 GET NONE:// - NONE/- text/html

    00-firewall

    iptables -F
    iptables -t nat -F
    iptables -t mangle -F
    iptables -X
    echo 1 | tee /proc/sys/net/ipv4/ip_forward
    iptables -t nat -A POSTROUTING -j MASQUERADE
    iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3128

    networking

    auto lo
    iface lo inet loopback

    auto eth0
    iface eth0 inet static
        address 142.104.109.179
        netmask 255.255.224.0
        gateway 142.104.127.254

    auto eth1
    iface eth1 inet static
        address 192.168.1.100
        netmask 255.255.255.0
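
    For reference, the access.log line above (TCP_DENIED/400 with a GET NONE:// URL) is the classic signature of intercepted traffic arriving on a port that Squid treats as a plain proxy port. In Squid 2.7 the listening port has to be explicitly marked for interception before redirected requests are accepted. A one-line sketch of the relevant directive, shown as an illustration rather than a confirmed fix for this particular setup:

        # /etc/squid/squid.conf (Squid 2.7 syntax): replaces the plain "http_port 3128"
        http_port 3128 transparent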

    Read the article

  • Cisco ASA 5505 site to site IPSEC VPN won't route from multiple LANs

    - by franklundy
    Hi, I've set up a standard site-to-site VPN between two ASA 5505s (using the wizard in ASDM) and the VPN works fine for traffic between Site A and Site B on the directly connected LANs. But this VPN is actually meant to carry data originating on LAN subnets that are one hop away from the directly connected LANs: another router is connected to each ASA (LAN side) and routes to two completely different LAN ranges, where the clients and servers reside. At the moment, any traffic that reaches the ASA but did not originate from the directly connected LAN is sent straight to the default gateway, not through the VPN. I've tried adding the additional subnets to the "Protected Networks" on the VPN, but that has no effect. I have also tried adding a static route to each ASA to point the traffic at the other side, but that hasn't worked either. Here is the config for one of the sites. It works perfectly for traffic to/from the 192.168.144.x subnets. What I need is to be able to route traffic from 10.1.0.0/24 to 10.2.0.0/24, for example.

    ASA Version 8.0(3)
    !
    hostname Site1
    enable password ** encrypted
    names
    name 192.168.144.4 Site2
    !
    interface Vlan1
     nameif inside
     security-level 100
     ip address 192.168.144.2 255.255.255.252
    !
    interface Vlan2
     nameif outside
     security-level 0
     ip address 10.78.254.70 255.255.255.252 (this is a private WAN circuit)
    !
    interface Ethernet0/0
     switchport access vlan 2
    !
    interface Ethernet0/1
    !
    interface Ethernet0/2
    !
    interface Ethernet0/3
    !
    interface Ethernet0/4
    !
    interface Ethernet0/5
    !
    interface Ethernet0/6
    !
    interface Ethernet0/7
    !
    passwd ** encrypted
    ftp mode passive
    access-list inside_access_in extended permit ip any any
    access-list outside_access_in extended permit icmp any any echo-reply
    access-list outside_1_cryptomap extended permit ip 192.168.144.0 255.255.255.252 Site2 255.255.255.252
    access-list inside_nat0_outbound extended permit ip 192.168.144.0 255.255.255.252 Site2 255.255.255.252
    pager lines 24
    logging enable
    logging asdm informational
    mtu inside 1500
    mtu outside 1500
    icmp unreachable rate-limit 1 burst-size 1
    asdm image disk0:/asdm-603.bin
    no asdm history enable
    arp timeout 14400
    global (outside) 1 interface
    nat (inside) 0 access-list inside_nat0_outbound
    nat (inside) 1 0.0.0.0 0.0.0.0
    access-group inside_access_in in interface inside
    access-group outside_access_in in interface outside
    route outside 0.0.0.0 0.0.0.0 10.78.254.69 1
    timeout xlate 3:00:00
    timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
    timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
    timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
    timeout uauth 0:05:00 absolute
    dynamic-access-policy-record DfltAccessPolicy
    aaa authentication ssh console LOCAL
    http server enable
    http 0.0.0.0 0.0.0.0 outside
    http 192.168.1.0 255.255.255.0 inside
    no snmp-server location
    no snmp-server contact
    snmp-server enable traps snmp authentication linkup linkdown coldstart
    crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac
    crypto map outside_map 1 match address outside_1_cryptomap
    crypto map outside_map 1 set pfs
    crypto map outside_map 1 set peer 10.78.254.66
    crypto map outside_map 1 set transform-set ESP-3DES-SHA
    crypto map outside_map interface outside
    crypto isakmp enable outside
    crypto isakmp policy 10
     authentication pre-share
     encryption 3des
     hash sha
     group 2
     lifetime 86400
    no crypto isakmp nat-traversal
    telnet timeout 5
    ssh 0.0.0.0 0.0.0.0 outside
    ssh timeout 5
    console timeout 0
    management-access inside
    threat-detection basic-threat
    threat-detection statistics port
    threat-detection statistics protocol
    threat-detection statistics access-list
    group-policy DfltGrpPolicy attributes
     vpn-idle-timeout none
    username enadmin password * encrypted privilege 15
    tunnel-group 10.78.254.66 type ipsec-l2l
    tunnel-group 10.78.254.66 ipsec-attributes
     pre-shared-key *
    !
    !
    prompt hostname context
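    The usual cause of this behavior is that neither the crypto ("interesting traffic") ACL nor the NAT-exemption ACL matches the one-hop-away subnets, so the ASA follows its default route instead of encrypting the traffic. Below is a hedged sketch of the fix for the Site 1 config above, assuming the subnet behind Site 1's internal router is 10.1.0.0/24, the subnet behind Site 2's is 10.2.0.0/24, and the internal router's address on the inside segment is 192.168.144.1 (that last address is an assumption, not from the question):

        ! match the one-hop-away subnets as interesting traffic and exempt them from NAT
        access-list outside_1_cryptomap extended permit ip 10.1.0.0 255.255.255.0 10.2.0.0 255.255.255.0
        access-list inside_nat0_outbound extended permit ip 10.1.0.0 255.255.255.0 10.2.0.0 255.255.255.0
        ! tell the ASA how to reach its own local one-hop-away subnet
        route inside 10.1.0.0 255.255.255.0 192.168.144.1 1

    The Site 2 ASA needs the mirror-image lines (source and destination swapped), and each internal router needs a route for the remote ranges pointing back at its local ASA. The two crypto ACLs must be exact mirrors of each other or phase 2 will fail to negotiate.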

    Read the article

  • How can I forward ALL traffic over a site-to-site VPN on Cisco ASA?

    - by Scott Clements
    Hi there, I currently have two Cisco ASA 5100 routers. They are at different physical sites and are configured with a site-to-site VPN, which is active and working. I can communicate with the subnets on either site from the other, and both are connected to the internet; however, I need to ensure that all the traffic at my remote site goes through this VPN to my site here. I know that the web traffic does, as a "tracert" confirms, but I need to ensure that all other network traffic is also being directed over this VPN to my network here. Here is my config for the ASA router at my remote site:

    hostname ciscoasa
    domain-name xxxxx
    enable password 78rl4MkMED8xiJ3g encrypted
    names
    !
    interface Ethernet0/0
     nameif NIACEDC
     security-level 100
     ip address x.x.x.x 255.255.255.0
    !
    interface Ethernet0/1
     description External Janet Connection
     nameif JANET
     security-level 0
     ip address x.x.x.x 255.255.255.248
    !
    interface Ethernet0/2
     shutdown
     no nameif
     security-level 100
     no ip address
    !
    interface Ethernet0/3
     shutdown
     no nameif
     security-level 100
     ip address dhcp setroute
    !
    interface Management0/0
     nameif management
     security-level 100
     ip address 192.168.100.1 255.255.255.0
     management-only
    !
    passwd 2KFQnbNIdI.2KYOU encrypted
    ftp mode passive
    clock timezone GMT/BST 0
    clock summer-time GMT/BDT recurring last Sun Mar 1:00 last Sun Oct 2:00
    dns domain-lookup NIACEDC
    dns server-group DefaultDNS
     name-server 154.32.105.18
     name-server 154.32.107.18
     domain-name XXXX
    same-security-traffic permit inter-interface
    same-security-traffic permit intra-interface
    access-list ren_access_in extended permit ip any any
    access-list ren_access_in extended permit tcp any any
    access-list ren_nat0_outbound extended permit ip 192.168.6.0 255.255.255.0 192.168.3.0 255.255.255.0
    access-list NIACEDC_nat0_outbound extended permit ip 192.168.12.0 255.255.255.0 192.168.3.0 255.255.255.0
    access-list JANET_20_cryptomap extended permit ip 192.168.12.0 255.255.255.0 192.168.3.0 255.255.255.0
    access-list NIACEDC_access_in extended permit ip any any
    access-list NIACEDC_access_in extended permit tcp any any
    access-list JANET_access_out extended permit ip any any
    access-list NIACEDC_access_out extended permit ip any any
    pager lines 24
    logging enable
    logging asdm informational
    mtu NIACEDC 1500
    mtu JANET 1500
    mtu management 1500
    icmp unreachable rate-limit 1 burst-size 1
    asdm image disk0:/asdm-522.bin
    no asdm history enable
    arp timeout 14400
    nat-control
    global (NIACEDC) 1 interface
    global (JANET) 1 interface
    nat (NIACEDC) 0 access-list NIACEDC_nat0_outbound
    nat (NIACEDC) 1 192.168.12.0 255.255.255.0
    access-group NIACEDC_access_in in interface NIACEDC
    access-group NIACEDC_access_out out interface NIACEDC
    access-group JANET_access_out out interface JANET
    route JANET 0.0.0.0 0.0.0.0 194.82.121.82 1
    route JANET 0.0.0.0 0.0.0.0 192.168.3.248 tunneled
    timeout xlate 3:00:00
    timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
    timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
    timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
    timeout uauth 0:05:00 absolute
    http server enable
    http 192.168.12.0 255.255.255.0 NIACEDC
    http 192.168.100.0 255.255.255.0 management
    http 192.168.9.0 255.255.255.0 NIACEDC
    no snmp-server location
    no snmp-server contact
    snmp-server enable traps snmp authentication linkup linkdown coldstart
    crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac
    crypto ipsec transform-set ESP-AES-256-SHA esp-aes-256 esp-sha-hmac
    crypto map JANET_map 20 match address JANET_20_cryptomap
    crypto map JANET_map 20 set pfs
    crypto map JANET_map 20 set peer X.X.X.X
    crypto map JANET_map 20 set transform-set ESP-AES-256-SHA
    crypto map JANET_map interface JANET
    crypto isakmp enable JANET
    crypto isakmp policy 10
     authentication pre-share
     encryption aes-256
     hash sha
     group 2
     lifetime 86400
    crypto isakmp policy 30
     authentication pre-share
     encryption 3des
     hash sha
     group 2
     lifetime 86400
    crypto isakmp policy 50
     authentication pre-share
     encryption aes-256
     hash sha
     group 5
     lifetime 86400
    tunnel-group X.X.X.X type ipsec-l2l
    tunnel-group X.X.X.X ipsec-attributes
     pre-shared-key *
    telnet timeout 5
    ssh timeout 5
    console timeout 0
    dhcpd address 192.168.100.2-192.168.100.254 management
    dhcpd enable management
    !
    class-map inspection_default
     match default-inspection-traffic
    !
    policy-map type inspect dns preset_dns_map
     parameters
      message-length maximum 512
    policy-map global_policy
     class inspection_default
      inspect dns preset_dns_map
      inspect ftp
      inspect h323 h225
      inspect h323 ras
      inspect rsh
      inspect rtsp
      inspect esmtp
      inspect sqlnet
      inspect skinny
      inspect sunrpc
      inspect xdmcp
      inspect sip
      inspect netbios
      inspect tftp
      inspect http
    !
    service-policy global_policy global
    prompt hostname context
    no asdm history enable

    Thanks in advance, Scott
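    The config above already has a tunneled default route (route JANET 0.0.0.0 0.0.0.0 192.168.3.248 tunneled), but that route only applies to traffic the crypto map is willing to encrypt, and the crypto ACL JANET_20_cryptomap only matches 192.168.12.0/24 to 192.168.3.0/24. A hedged sketch of a "tunnel everything" change, untested and assuming the head-end peer is reconfigured to accept it (addresses are taken from the question):

        ! widen the interesting-traffic and NAT-exemption ACLs from /24-to-/24 to /24-to-any
        access-list JANET_20_cryptomap extended permit ip 192.168.12.0 255.255.255.0 any
        access-list NIACEDC_nat0_outbound extended permit ip 192.168.12.0 255.255.255.0 any

    The head-end ASA then needs the mirror-image crypto ACL (permit ip any 192.168.12.0 255.255.255.0), same-security-traffic permit intra-interface so internet-bound traffic arriving over the tunnel can hairpin back out of its outside interface, and a NAT rule there to translate 192.168.12.0/24 for the internet.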

    Read the article

  • iptables - quick safety eval & limit max conns over time

    - by Peter Hanneman
    Working on locking down a *nix server box with some fancy iptables (v1.4.4) rules. I'm approaching the matter with a "paranoid, everyone's out to get me" style, not necessarily because I expect the box to be a hacker magnet but rather for the sake of learning iptables and *nix security more thoroughly. Everything is well commented - so if anyone sees something I missed, please let me know! The *nat table's "--to-ports" entries point to the only ports with actively listening services (aside from pings). Layer 2 apps listen exclusively on chmod'ed sockets bridged by one of the layer 1 daemons. Layers 3+ inherit from layer 2 in a similar fashion. The two lines giving me grief are commented out at the very bottom of the *filter rules. The first line runs fine, but it's all or nothing. :) Many thanks, Peter H.

    *nat
    #Flush previous rules, chains and counters for the 'nat' table
    -F
    -X
    -Z
    #Redirect traffic to alternate internal ports
    -I PREROUTING --src 0/0 -p tcp --dport 80 -j REDIRECT --to-ports 8080
    -I PREROUTING --src 0/0 -p tcp --dport 443 -j REDIRECT --to-ports 8443
    -I PREROUTING --src 0/0 -p udp --dport 53 -j REDIRECT --to-ports 8053
    -I PREROUTING --src 0/0 -p tcp --dport 9022 -j REDIRECT --to-ports 8022
    COMMIT

    *filter
    #Flush previous settings, chains and counters for the 'filter' table
    -F
    -X
    -Z
    #Set default behavior for all connections and protocols
    -P INPUT DROP
    -P OUTPUT DROP
    -A FORWARD -j DROP
    #Only accept loopback traffic originating from the local NIC
    -A INPUT -i lo -j ACCEPT
    -A INPUT ! -i lo -d 127.0.0.0/8 -j DROP
    #Accept all outgoing non-fragmented traffic having a valid state
    -A OUTPUT ! -f -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
    #Drop fragmented incoming packets (Not always malicious - acceptable for use now)
    -A INPUT -f -j DROP
    #Allow ping requests rate limited to one per second (burst ensures reliable results for high latency connections)
    -A INPUT -p icmp --icmp-type 8 -m limit --limit 1/sec --limit-burst 2 -j ACCEPT
    #Declaration of custom chains
    -N INSPECT_TCP_FLAGS
    -N INSPECT_STATE
    -N INSPECT
    #Drop incoming tcp connections with invalid tcp-flags
    -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL ALL -j DROP
    -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL NONE -j DROP
    -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ACK,FIN FIN -j DROP
    -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ACK,PSH PSH -j DROP
    -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ACK,URG URG -j DROP
    -A INSPECT_TCP_FLAGS -p tcp --tcp-flags SYN,FIN SYN,FIN -j DROP
    -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL FIN,PSH,URG -j DROP
    -A INSPECT_TCP_FLAGS -p tcp --tcp-flags FIN,RST FIN,RST -j DROP
    -A INSPECT_TCP_FLAGS -p tcp --tcp-flags SYN,RST SYN,RST -j DROP
    -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL SYN,FIN,PSH,URG -j DROP
    -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL SYN,RST,ACK,FIN,URG -j DROP
    #Accept incoming traffic having either an established or related state
    -A INSPECT_STATE -m state --state ESTABLISHED,RELATED -j ACCEPT
    #Drop new incoming tcp connections if they aren't SYN packets
    -A INSPECT_STATE -m state --state NEW -p tcp ! --syn -j DROP
    #Drop incoming traffic with invalid states
    -A INSPECT_STATE -m state --state INVALID -j DROP
    #INSPECT chain definition
    -A INSPECT -p tcp -j INSPECT_TCP_FLAGS
    -A INSPECT -j INSPECT_STATE
    #Route incoming traffic through the INSPECT chain
    -A INPUT -j INSPECT
    #Accept redirected HTTP traffic via HA reverse proxy
    -A INPUT -p tcp --dport 8080 -j ACCEPT
    #Accept redirected HTTPS traffic via STUNNEL SSH gateway (As well as tunneled HTTPS traffic destined for other services)
    -A INPUT -p tcp --dport 8443 -j ACCEPT
    #Accept redirected DNS traffic for NSD authoritative nameserver
    -A INPUT -p udp --dport 8053 -j ACCEPT
    #Accept redirected SSH traffic for OpenSSH server
    #Temp solution:
    -A INPUT -p tcp --dport 8022 -j ACCEPT
    #Ideal solution:
    #Limit new ssh connections to max 10 per 10 minutes while allowing an "unlimited" (or better reasonably limited?) number of established connections.
    #-A INPUT -p tcp --dport 8022 --state NEW,ESTABLISHED -m recent --set -j ACCEPT
    #-A INPUT -p tcp --dport 8022 --state NEW -m recent --update --seconds 600 --hitcount 11 -j DROP
    COMMIT

    *mangle
    #Flush previous rules, chains and counters in the 'mangle' table
    -F
    -X
    -Z
    COMMIT
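    About those two commented-out lines: --state is an option of the state match, so it must be preceded by -m state, and the drop check has to come before the rule that records and accepts, or the counter never blocks anything. A possible corrected version, untested and assuming the xt_recent module is available (the list name SSH is arbitrary):

        #Drop a source once it has made 10 new connections inside a rolling 600-second window
        -A INPUT -p tcp --dport 8022 -m state --state NEW -m recent --name SSH --update --seconds 600 --hitcount 10 -j DROP
        #Otherwise record the attempt and accept it
        -A INPUT -p tcp --dport 8022 -m state --state NEW -m recent --name SSH --set -j ACCEPT

    Established SSH sessions never reach these rules because the INSPECT_STATE chain already accepts ESTABLISHED traffic, so only new connection attempts are counted: the first ten per source in any rolling 600-second window are accepted, and the eleventh is dropped.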

    Read the article

  • More than 100,000 articles!

    - by developerit
    In one month we have already gathered more than 100,000 articles, and we continue to crawl! We plan on hitting 250,000 total articles next month. Because of the large amount of data we are gathering, we are planning to update our SQL stored procedures to improve performance. We may also migrate to SQL Server 2008 Enterprise, as we are currently running on SQL Server 2005 Express Edition. We are at 400 MB of data, getting closer and closer to the 2 GB limit. Stay tuned for more info, and browse daily fresh articles about web development.

    Read the article

  • 8 Things You Can Do In Android’s Developer Options

    - by Chris Hoffman
    The Developer Options menu in Android is a hidden menu with a variety of advanced options. These options are intended for developers, but many of them will be interesting to geeks. You’ll have to perform a secret handshake to enable the Developer Options menu in the Settings screen, as it’s hidden from Android users by default. Follow the simple steps to quickly enable Developer Options.

    Enable USB Debugging
    “USB debugging” sounds like an option only an Android developer would need, but it’s probably the most widely used hidden option in Android. USB debugging allows applications on your computer to interface with your Android phone over the USB connection. This is required for a variety of advanced tricks, including rooting an Android phone, unlocking it, installing a custom ROM, or even using a desktop program that captures screenshots of your Android device’s screen. You can also use ADB commands to push and pull files between your device and your computer or create and restore complete local backups of your Android device without rooting. USB debugging can be a security concern, as it gives computers you plug your device into access to your phone. You could plug your device into a malicious USB charging port, which would try to compromise you. That’s why Android forces you to agree to a prompt every time you plug your device into a new computer with USB debugging enabled.

    Set a Desktop Backup Password
    If you use the above ADB trick to create local backups of your Android device over USB, you can protect them with a password with the Set a desktop backup password option here. This password encrypts your backups to secure them, so you won’t be able to access them if you forget the password.

    Disable or Speed Up Animations
    When you move between apps and screens in Android, you’re spending some of that time looking at animations and waiting for them to go away. You can disable these animations entirely by changing the Window animation scale, Transition animation scale, and Animator duration scale options here. If you like animations but just wish they were faster, you can speed them up. On a fast phone or tablet, this can make switching between apps nearly instant. If you thought your Android phone was speedy before, just try disabling animations and you’ll be surprised how much faster it can seem.

    Force-Enable 4x MSAA For OpenGL Games
    If you have a high-end phone or tablet with great graphics performance and you play 3D games on it, there’s a way to make those games look even better. Just go to the Developer Options screen and enable the Force 4x MSAA option. This will force Android to use 4x multisample anti-aliasing in OpenGL ES 2.0 games and other apps. This requires more graphics power and will probably drain your battery a bit faster, but it will improve image quality in some games. This is a bit like force-enabling antialiasing using the NVIDIA Control Panel on a Windows gaming PC.

    See How Bad Task Killers Are
    We’ve written before about how task killers are worse than useless on Android. If you use a task killer, you’re just slowing down your system by throwing out cached data and forcing Android to load apps from system storage whenever you open them again. Don’t believe us? Enable the Don’t keep activities option on the Developer Options screen and Android will force-close every app you use as soon as you exit it. Enable this option and use your phone normally for a few minutes — you’ll see just how harmful throwing out all that cached data is and how much it will slow down your phone. Don’t actually use this option unless you want to see how bad it is! It will make your phone perform much more slowly — there’s a reason Google has hidden these options away from average users who might accidentally change them.

    Fake Your GPS Location
    The Allow mock locations option allows you to set fake GPS locations, tricking Android into thinking you’re at a location where you actually aren’t. Use this option along with an app like Fake GPS location and you can trick your Android device and the apps running on it into thinking you’re at locations where you actually aren’t. How would this be useful? Well, you could fake a GPS check-in at a location without actually going there or confuse your friends in a location-tracking app by seemingly teleporting around the world.

    Stay Awake While Charging
    You can use Android’s Daydream Mode to display certain apps while charging your device. If you want to force Android to display a standard Android app that hasn’t been designed for Daydream Mode, you can enable the Stay awake option here. Android will keep your device’s screen on while charging and won’t turn it off. It’s like Daydream Mode, but it can support any app and allows users to interact with it.

    Show Always-On-Top CPU Usage
    You can view CPU usage data by toggling the Show CPU usage option to On. This information will appear on top of whatever app you’re using. If you’re a Linux user, the three numbers on top probably look familiar — they represent the system load average. From left to right, the numbers represent your system load over the last one, five, and fifteen minutes. This isn’t the kind of thing you’d want enabled most of the time, but it can save you from having to install third-party floating CPU apps if you want to see CPU usage information for some reason.

    Most of the other options here will only be useful to developers debugging their Android apps. You shouldn’t start changing options you don’t understand. If you want to undo any of these changes, you can quickly erase all your custom options by sliding the switch at the top of the screen to Off.
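    To make the ADB workflows above concrete, here is a hedged sketch of the relevant commands, assuming the Android SDK platform-tools are on your PATH and USB debugging is enabled; the file names are examples only:

        adb devices                                        # list attached devices; confirms the debugging prompt was accepted
        adb push notes.txt /sdcard/Download/notes.txt      # copy a file to the device
        adb pull /sdcard/DCIM/Camera/photo.jpg .           # copy a file from the device
        adb backup -apk -shared -all -f backup.ab          # full local backup, no root required
        adb restore backup.ab                              # restore that backup
        # on newer Android versions the animation scales can also be set from the shell:
        adb shell settings put global window_animation_scale 0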

    Read the article

  • Database Table Prefixes

    - by DoctorMick
    We're having a few discussions at work around the naming of our database tables. We're working on a large application with approximately 100 database tables (OK, so it isn't that large), most of which can be categorized into different functional areas, and we're trying to work out the best way of naming and organizing these within an Oracle database. The three current options are:
    1. Create the different functional areas in separate schemas.
    2. Create everything in the same schema but prefix the tables with the functional area.
    3. Create everything in the same schema with no prefixes.
    We have various pros and cons around each one, but I'd be interested to hear everyone's opinions on what the best solution is.
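    For reference, a hedged Oracle sketch of the first two options (schema, table, and column names are invented for illustration):

        -- Option 1: one schema per functional area
        CREATE TABLE billing.invoice (invoice_id  NUMBER PRIMARY KEY);
        CREATE TABLE hr.employee     (employee_id NUMBER PRIMARY KEY);
        -- cross-area access needs grants, plus schema-qualified names or synonyms
        GRANT SELECT ON billing.invoice TO hr;
        CREATE SYNONYM hr.invoice FOR billing.invoice;

        -- Option 2: one schema, functional-area prefixes
        CREATE TABLE app.bil_invoice (invoice_id  NUMBER PRIMARY KEY);
        CREATE TABLE app.hr_employee (employee_id NUMBER PRIMARY KEY);

    Option 1 buys per-area privileges and no name collisions at the cost of grant and synonym management; option 2 keeps administration simple but leaves the prefixes to carry all the organization.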

    Read the article

  • Silverlight Cream for November 20, 2011 - 2 -- #1170

    - by Dave Campbell
    In this Issue: Michael Washington, Oliver Fuh, Jeremy Likness, Derik Whittaker, Jesse Liberty, Jeff Blankenburg (-2-), and Michael Crump.
    Above the Fold:
    Silverlight: "Handling Extremely Large Data Sets in Silverlight" Jeremy Likness
    WP7: "31 Days of Mango | Day #8: Contacts API" Jeff Blankenburg
    LightSwitch: "LightSwitch Chat Application Using A Data Source Extension" Michael Washington
    Shoutouts: Michael Palermo's latest Desert Mountain Developers is up. Michael Washington's latest Visual Studio #LightSwitch Daily is up. Check out Shawn Wildermuth's take on the AppStore and WP7 in general: 40,000 Apps - What Does It Mean? Be sure to check out Jesse Liberty & Paul Betts' new book: Programming Reactive Extensions and LINQ; I've just had a little time to look at mine, but don't let the size fool you... this is the good stuff!
    From SilverlightCream.com:
    LightSwitch Chat Application Using A Data Source Extension - In his latest LightSwitch post, Michael Washington gives up code that will enable two people using the same LightSwitch app to chat. Great detailed tutorial as usual!
    Handling AdControl Fetching Exception - WindowsPhoneGeek turns the blog reins over to Oliver Fuh for this post about using the AdControl in your WP7 app and handling a common exception you get with the Microsoft AdControl.
    Handling Extremely Large Data Sets in Silverlight - In this excerpt from his book, Jeremy Likness discusses reading *LARGE* data sets with Silverlight using 3 different patterns: OData, WCF RIA Services, and MVVM.
    Using MVVM with the AutoCompleteTextBox in Silverlight 4 - Derik Whittaker takes a break from WinRT to discuss the Silverlight 4 AutoCompleteTextBox and MVVM, including a custom Behavior to allow the backing property to be updated and a command to trigger background searches.
    Yet Another Podcast #52 - Peter Torr on Windows Phone Multitasking - Jesse Liberty scored Peter Torr on his latest Yet Another Podcast, talking about multitasking on Windows Phone including background agents, the backstack, and other Mango features.
    31 Days of Mango | Day #8: Contacts API - Jeff Blankenburg's Day 8 is about a new namespace on WP7: Microsoft.Phone.UserData, now giving us the ability to treat the user's contact list like a local database.
    31 Days of Mango | Day #9: Calendar API - On Day 9 in his series, Jeff Blankenburg revisits the Microsoft.Phone.UserData namespace and looks at another set of data: the calendar.
    Want to Decompile Silverlight XAP files? Try JustDecompile Beta! - Michael Crump has a post up about the new free developer productivity tool from Telerik that provides assembly browsing and decompiling: JustDecompile... Just download it!
    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Best solution for multiplayer realtime Android game

    - by piotrek
    I plan to make a realtime multiplayer game for Android (2-8 players), and I am considering which approach to the multiplayer architecture would be best:
    1. Run a server on a PC with clients on mobile, so all communication goes through the server (Client A - PC server - all clients).
    2. Use Bluetooth; I haven't used it yet, and I don't know how hard it is to build multiplayer over Bluetooth.
    3. Host the server on one of the devices and have the other devices connect (over the network, though I don't know how hard it is to solve the problem of devices behind NAT).
    4. Some other solution?
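    For comparison, option 1 is the simplest to prototype. Below is a hedged sketch of a line-based TCP relay server, assuming the Android clients serialize each state update to a single line of text; the class name, port, and protocol are invented for illustration:

        import java.io.*;
        import java.net.*;
        import java.util.Set;
        import java.util.concurrent.CopyOnWriteArraySet;

        public class RelayServer {
            // every connected player's output stream; copy-on-write keeps the broadcast thread-safe
            private static final Set<PrintWriter> clients = new CopyOnWriteArraySet<>();

            public static void main(String[] args) throws IOException {
                try (ServerSocket listener = new ServerSocket(5000)) {
                    while (true) {
                        Socket socket = listener.accept();        // a player joins
                        new Thread(() -> handle(socket)).start();
                    }
                }
            }

            private static void handle(Socket socket) {
                PrintWriter out = null;
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(socket.getInputStream()))) {
                    out = new PrintWriter(socket.getOutputStream(), true);
                    clients.add(out);
                    String line;
                    while ((line = in.readLine()) != null) {      // one state update per line
                        for (PrintWriter client : clients) {
                            client.println(line);                 // relay to every player
                        }
                    }
                } catch (IOException ignored) {
                    // client disconnected
                } finally {
                    if (out != null) clients.remove(out);
                }
            }
        }

    This layout sidesteps NAT entirely because every device makes an outbound connection to the server, and for 2-8 players the broadcast loop is trivially cheap; a real-time game would likely want UDP or a compact binary protocol to reduce latency.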

    Read the article

  • SQL SERVER – Denali Feature – Zoom Query Editor

    - by pinaldave
    The next version of SQL Server, ‘Denali’, comes with a very neat feature that is handy for presentations, group discussions, or anyone who prefers large fonts: the query editor can be zoomed. I increased the zoom to 400 percent, so the fonts appear very large; you can adjust the zoom to whatever level is convenient for you. One more reason to go for the next version of SQL Server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Design Pattern for Social Game Mission Mechanics

    - by Furkan ÇALISKAN
    When we want to design a mission sub-system like the ones in The Ville or The Sims Social, what kind of design pattern or idea would fit best? There may be relations between missions (first do this, then this, etc.) or none at all. What do you think The Sims Social, The Ville, or other social games use for this? I'm looking for a best-practice method to construct a mission framework for the game. How do the well-known game companies do this for their large-scale social Facebook games? They give missions to the players and wait for the players to complete them; when a mission is finished, they need a way to catch the mission-complete event - all while keeping server-side work low, given a large user database, to avoid heavy traffic and resource consumption. How should I design the database and the server-client communication to achieve this, considering that trade-off?
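    One common shape for this, sketched below in Java with invented names: each mission is a small state machine whose prerequisites form a directed graph ("first do this, then this"), and the server advances progress from gameplay events rather than polling. To keep server traffic down, clients typically batch events and sync periodically instead of posting every action. This is a hedged sketch, not what any particular studio ships:

        import java.util.*;

        class Mission {
            final String id;
            final Set<String> prerequisites;        // mission ids that must be complete first
            final Map<String, Integer> goals;       // event type -> required count
            final Map<String, Integer> progress = new HashMap<>();
            boolean complete = false;

            Mission(String id, Set<String> prerequisites, Map<String, Integer> goals) {
                this.id = id;
                this.prerequisites = prerequisites;
                this.goals = goals;
            }
        }

        class MissionTracker {
            private final Map<String, Mission> missions = new LinkedHashMap<>();
            private final Set<String> completed = new HashSet<>();

            void add(Mission m) { missions.put(m.id, m); }

            // called once per player action, e.g. tracker.onEvent("PLANT_CROP")
            void onEvent(String eventType) {
                for (Mission m : missions.values()) {
                    if (m.complete || !completed.containsAll(m.prerequisites)) continue; // locked or done
                    if (!m.goals.containsKey(eventType)) continue;                       // irrelevant event
                    m.progress.merge(eventType, 1, Integer::sum);
                    if (meetsAllGoals(m)) {
                        m.complete = true;
                        completed.add(m.id);  // unlocks any mission listing m as a prerequisite
                        // persist completion and grant rewards here
                    }
                }
            }

            private boolean meetsAllGoals(Mission m) {
                for (Map.Entry<String, Integer> goal : m.goals.entrySet()) {
                    if (m.progress.getOrDefault(goal.getKey(), 0) < goal.getValue()) return false;
                }
                return true;
            }
        }

    In the database this maps naturally to a static mission-definition table plus a per-player progress table keyed by (player_id, mission_id), so the completion event is caught server-side at the moment the progress row reaches its goal.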

    Read the article

  • JavaScript Intellisense Improvements with VS 2010

    This is the twentieth in a series of blog posts I'm doing on the upcoming VS 2010 and .NET 4 release. Today's blog post covers some of the nice improvements coming to JavaScript Intellisense with VS 2010 and the free Visual Web Developer 2010 Express. You'll find with VS 2010 that JavaScript Intellisense loads much faster for large script files and with large libraries, and that it now provides statement completion support for more advanced scenarios compared to previous versions.

    Read the article

  • A Plea for Plain English

    - by Tony Davis
    The English language has, within a lifetime, emerged as the ubiquitous 'international language' of scientific, political and technical communication. On the one hand, learning a single, common language, International English, has made it much easier to participate in and adopt new technologies; on the other hand it must be exasperating to have to use English at international conferences, or on community sites, when your own language has a long tradition of scientific and technical usage. It is also hard to master the subtleties of using a foreign language to explain advanced ideas. This requires English speakers to be more considerate in their writing.

    Even if you're used to speaking English, you may be brought up short by this sort of verbiage: "Business Intelligence delivering actionable insights is becoming more critical in the enterprise, and these insights require large data volumes for trending and forecasting". It takes some imagination to appreciate the added hassle in working out what it means, when English is a language you only use at work. Try, just to get a vague feel for it, using Google Translate to translate it from English to Chinese and back again: "Providing actionable business intelligence point of view is becoming more and more and more business critical, and requires that these insights and projected trends in large amounts of data". Not easy, eh? If you normally use a different language, you will need to pause for thought before finally working out that it really means: "Every Business Intelligence solution must be able to help companies to make decisions. In order to detect current trends, and accurately predict future ones, we need to analyze large volumes of data".

    Surely, it is simple politeness for English speakers to stop peppering their writing with a twisted vocabulary that renders it inaccessible to everyone else. It isn't just the problem of writers who use long words to give added dignity to their prose. It is the use of Colloquial English. This changes and evolves at a dizzying rate, adding new terms and idioms almost daily; it is almost a new and separate language. By contrast, 'International English' is gradually evolving separately, at its own, more sedate, pace. As such, all native English speakers need to make an effort to learn, and use, it, switching from casual colloquial patter into a simpler form of communication that can be widely understood by different cultures, even if it gives you less credibility on the street. Simple-Talk is based, at least in part, on the idea that technical articles can be written simply and clearly in a form of English that can be easily understood internationally, and that they can be written, with a little editorial help, by anyone, and read by anyone, regardless of their native language. Cheers, Tony.

    Read the article

  • Friday Fun: Snow Crusher

    - by Asian Angel
    It has probably been a long week whether you have already returned to work or are finishing up the last of your vacation time. If you are in need of some stress relief, then we have the perfect game for you. This week you get to be totally fiendish and use a monster-size snowball to destroy as many cars as possible at the local snow lodge.

    Snow Crusher
    The object of the game is simple: create as large a monster snowball as you can and then send it down over the side of the mountain to destroy the cars at the snow lodge. You can choose from three different sizes of monster snowballs to create. We chose the “Snowflake Size” for our reign of destruction. Once you have chosen a monster snowball size, all that is left to do is select the control method that works best for you. As soon as you select the control method, your monster snowball creation will automatically begin. Keep in mind that the faster your snowball goes, the harder it can become to steer if you make sudden movements. At the top you can watch your progress towards the drop-off point, and the green boxes highlighted at the bottom indicate how large an item (such as trees or boulders) your snowball can roll over and add to its total mass. Snowball speed is shown in the lower right corner. Time to roll! As soon as the first green box is lit up, you can start adding small trees to your snowball’s mass. You will want to avoid larger items as you go because they will penalize your score, slow you down, and reduce the size of your snowball! Halfway to the drop-off point, our snowball is now able to grab up larger trees. If you have not hit any large items along the way, your snowball will definitely be moving along at a good rate by now. When you reach the end of the mass-building area, your snowball will pop out into the open and get ready to drop off over the side of the mountain. Go snowball go! Yes! Thirteen cars crushed and ready for the scrap yard. If the “Snowflake Size” snowball can do this, just think what the “Avalanche Size” can do with three minutes of time to build up mass! Have fun with those monster snowballs! Play Snow Crusher

    Read the article

  • World Record Performance on PeopleSoft Enterprise Financials Benchmark on SPARC T4-2

    - by Brian
    Oracle's SPARC T4-2 server achieved world-record performance on Oracle's PeopleSoft Enterprise Financials 9.1, executing 20 million journal lines in 8.92 minutes on Oracle Database 11g Release 2 running on Oracle Solaris 11. This is the first published result on this version of the benchmark. The SPARC T4-2 server processed 20 million general-ledger journal edit-and-post batch jobs in 8.92 minutes on a benchmark that reflects a large customer environment with a back-end database of nearly 500 GB. This demonstrates that the SPARC T4-2 server with PeopleSoft Financials 9.1 can easily process 100 million journal lines in less than one hour. The SPARC T4-2 server delivered more than 146 MB/sec of I/O throughput with Oracle Database 11g running on Oracle Solaris 11.

    Performance Landscape
    Results are presented for PeopleSoft Financials Benchmark 9.1. They are not comparable to results from the previous version, PeopleSoft Financials Benchmark 9.0, due to a significant change in the data model; version 9.1 supports batch only.

    PeopleSoft Financials Benchmark, Version 9.1
    Solution Under Test                                                Batch (min)
    SPARC T4-2 (2 x SPARC T4, 2.85 GHz)                                8.92

    PeopleSoft Financials Benchmark, Version 9.0
    Solution Under Test                                                Batch (min)    Batch with Online (min)
    SPARC Enterprise M4000 (Web/App), SPARC Enterprise M5000 (DB)      33.09          34.72
    SPARC T3-1 (Web/App), SPARC Enterprise M5000 (DB)                  35.82          37.01

    Configuration Summary
    Hardware Configuration: 1 x SPARC T4-2 server with 2 x SPARC T4 processors (2.85 GHz) and 128 GB memory.
    Storage Configuration: 1 x Sun Storage F5100 Flash Array (for database and redo logs); 2 x Sun Storage 2540-M2 arrays and 2 x Sun Storage 2501-M2 arrays (for backup).
    Software Configuration: Oracle Solaris 11 11/11 SRU 7.5; Oracle Database 11g Release 2 (11.2.0.3); PeopleSoft Financials 9.1 Feature Pack 2; PeopleSoft Supply Chain Management 9.1 Feature Pack 2; PeopleSoft PeopleTools 8.52 latest patch - 8.52.03; Oracle WebLogic Server 10.3.5; Java Platform, Standard Edition Development Kit 6 Update 32.

    Benchmark Description
    The PeopleSoft Enterprise Financials 9.1 benchmark emulates a large enterprise that processes and validates a large number of financial journal transactions before posting the journal entry to the ledger. The validation process certifies that the journal entries are accurate, ensuring that ChartFields values are valid, debits and credits equal out, and inter/intra-units are balanced. Once validated, the entries are processed, ensuring that each journal line posts to the correct target ledger, and the journal status is then changed to posted. In this benchmark, Journal Edit & Post is set up to edit and post both Inter-Unit and Regular multi-currency journals. The benchmark processes 20 million journal lines using AppEngine for edits and Cobol for post processes.

    See Also: Oracle PeopleSoft Benchmark White Papers (oracle.com); SPARC T4-2 Server (oracle.com OTN); PeopleSoft Financial Management (oracle.com OTN); Oracle Solaris (oracle.com OTN); Oracle Database 11g Release 2 Enterprise Edition (oracle.com OTN).

    Disclosure Statement: Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 1 October 2012.

    Read the article

< Previous Page | 112 113 114 115 116 117 118 119 120 121 122 123  | Next Page >