Search Results

Search found 20592 results on 824 pages for 'path variables'.

Page 399/824 | < Previous Page | 395 396 397 398 399 400 401 402 403 404 405 406  | Next Page >

  • Designing javascript chart library

    - by coolscitist
    I started coding a chart library on top of d3js: My chart library. I read Javascript API reusability and Towards reusable charts. However, I am NOT really following their suggestions, because I am not really convinced by them. This is how my library can be used to create a bubble chart:

        var chart = new XYBubbleChart();
        chart.data = [{"xValue":200,"yValue":300},{"xValue":400,"yValue":200},{"xValue":100,"yValue":310}]; // set data
        chart.dataKey.x = "xValue";
        chart.dataKey.y = "yValue";
        chart.elementId = "#chart";
        chart.createChart();

    Here are my questions:
    1. It does not use chaining. Is that a big issue?
    2. Every property and function is exposed publicly. (Example: width and height are exposed in Chart.js.) OOP is all about abstraction and hiding, but I don't really see the point right now. I think exposing everything gives the flexibility to change properties and functionality inside subclasses and objects without writing a lot of code. What could be the pitfalls of such exposure?
    3. I have implemented functions like zooming and "showing info boxes when a data point is clicked" as "abilities" (example: XYZoomingAbility.js). Basically, such "abilities" accept a "chart" object and play around with the public variables of the "chart" to add functionality. What this allows me to do is to add an ability by writing: activateZoomAbility(chartObject); My goal is to separate "visualization" from "interactivity". I want "interactivity", like zooming, to be plugged into the chart rather than built inside the chart. I don't want my bubble chart to know anything about "zooming"; however, I do want a zoomable bubble chart. What is the best way to do this? (See the sketch after this list.)
    4. How to test, and what to test? I have written mixed tests: jasmine specs and actual html files, so that I can test manually in the browser.
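
    A minimal sketch of the "ability" pattern described above, assuming a d3 v3-era API and a chart object that publicly exposes svg, xScale, yScale and a redraw() hook (these names are illustrative, not the actual library's API):

        function activateZoomAbility(chart) {
            // The ability wires d3's zoom behavior onto the chart's public pieces;
            // the chart itself never learns that "zooming" exists.
            var zoom = d3.behavior.zoom()
                .x(chart.xScale)
                .y(chart.yScale)
                .on("zoom", function () {
                    chart.redraw(); // repaint using the rescaled axes
                });
            chart.svg.call(zoom);
        }

        // Usage: build the chart as usual, then bolt the ability on afterwards.
        // var chart = new XYBubbleChart(); ... chart.createChart();
        // activateZoomAbility(chart);

    One pitfall of full exposure, incidentally, is that every public property becomes a de facto contract: once abilities reach into chart.xScale directly, renaming or encapsulating it later breaks every plugin.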

    Read the article

  • MySQL Policy-Based Auditing Webinar Recording Now Available

    - by Rob Young
    For those who missed the live event, the recording of the "How to Add Policy-Based Auditing to your MySQL Applications" webinar is now available. You can view it here. This presentation builds on my earlier blog post on MySQL Enterprise Audit that was announced at MySQL Connect in late September. The web presentation expands on the introductory blog and covers:
    - The regulatory problem to be solved (internal audit, PCI, Sarbanes-Oxley, HIPAA, others)
    - MySQL Audit solutions for both Community and Enterprise users:
        - General Log - use the basic features of the MySQL server
        - MySQL 5.5 open audit API - or use your time and talent to build your own solution
        - MySQL Enterprise Audit - or use the out of the box, ready for production solution from MySQL
    - Simple, step-by-step process for installing, enabling and configuring the MySQL Enterprise Audit plugin for use with existing apps
    - New variables and options for tuning the MySQL Enterprise Audit plugin for your specific use case
    - Best practices for securing and managing audit log files and archived images
    - Roadmap for adding an integrated solution around MySQL Enterprise Audit for MySQL only and Oracle/MySQL shops
    You can learn all the technical details on MySQL Enterprise Audit in the MySQL docs and learn all about MySQL Enterprise Edition and Auditing here. As always, thanks for your support of MySQL!

    Read the article

  • Scaling background without scaling foreground in platformer?

    - by David Xu
    I'm currently developing a platform game and I've run into a problem with scaling resolutions. At a different game resolution I still want the foreground (characters, tiles, etc.) displayed unscaled, but I want the background scaled to fit the window. To explain this better, my viewport has 4 variables: (x, y, width, height), where x and y are the top left corner and width and height are the dimensions. These can be either 800x600, 1024x768 or 1280x960. When I design my levels, I design everything for the highest resolution (1280x960) and expect the game engine to scale it down if a user is running at a lower resolution. I have tried the following to make it work, but nothing I've come up with solves it so far:

        scale = view->width / 1280;
        drawX = x * scale;
        drawY = y * scale;

    (this makes the translation too small for low resolutions) and

        scale = view->width / 1280;
        bgWidth = background->width * scale;
        bgHeight = background->height * scale;
        drawX = x + background->width/2 - bgWidth/2;
        drawY = y + background->height/2 - bgHeight/2;

    (this makes the translation completely wrong at the edges of the map). The thing is, no matter what resolution the game is run at, the map remains the same size, and the foreground is unscaled. (At a lower resolution you just see less of the foreground in the viewport.) I was wondering if anyone had any idea how to solve this problem? Thank you in advance!
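
    One common way to get this behaviour (a sketch under assumptions, not the poster's engine API) is to stop positioning the background in world coordinates altogether: scale it to the window, then slide it by the camera's fractional progress across the fixed-size map, so its edges line up with the map's edges at any resolution:

        // view = {x, y, width, height}:  camera window into the fixed-size map
        // map  = {width, height}:        world size, identical at every resolution
        // bg   = {image, width, height}: background art, authored for 1280x960
        function drawBackground(ctx, view, map, bg, designWidth) { // designWidth e.g. 1280
            var scale = view.width / designWidth;   // how much this window is shrunk
            var bgW = bg.width * scale;             // background scaled to match
            var bgH = bg.height * scale;

            // Fraction of the map the camera has traversed, 0..1 on each axis.
            var tx = view.x / Math.max(1, map.width - view.width);
            var ty = view.y / Math.max(1, map.height - view.height);

            // Slide the scaled background so both ends line up with the map's ends.
            var drawX = -tx * Math.max(0, bgW - view.width);
            var drawY = -ty * Math.max(0, bgH - view.height);

            ctx.drawImage(bg.image, drawX, drawY, bgW, bgH);
        }

    With this mapping the foreground keeps using unscaled world coordinates, so the map stays the same size; only the backdrop is resampled per resolution.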

    Read the article

  • What do you call an obfuscator that isn't an obfuscator?

    - by Alex.Davies
    SmartAssembly, formerly {smartassembly}, version 5 is now available as an Early Access Build. You can get it here: http://www.red-gate.com/MessageBoard/viewforum.php?f=116 We're having second thoughts about the name change though. It isn't that we like the curly brackets, far from it. The trouble is that the first rule of product naming is to name a product by what it does. SmartAssembly may make an assembly smarter, but that's not something people really google for. The trouble is, I can't think of a better name for it. That's because SmartAssembly really does two completely separate things: it obfuscates, and it sets up your assembly for the awesome exception reports which get sent to you whenever your application crashes. You may have been (un?)lucky enough to see one in Reflector if you use it. When those exception reports arrive back with the developer, just look at all those local variables! If you ask me, this is much cooler than the obfuscation. So obviously we don't want to call it just "Red Gate Obfuscator" or something, because that doesn't do justice to the exception reporting. What would you call it?

    Read the article

  • Using ASP.NET C# and Javascript

    - by ctck
    I'm looking for the most efficient / standardized way of passing data between client javascript code and C# code-behind in an ASP.NET application. Currently I've been using the following methods to achieve this, but they all feel a bit like a fudge. The way I pass data from javascript to the C# code-behind is by setting hidden asp variables and triggering a postback:

        <asp:HiddenField ID="RandomList" runat="server" />

        function SetDataField(data) {
            document.getElementById('<%= RandomList.ClientID %>').value = data;
        }

    Then in C# code I collect the list:

        protected void GetData(object sender, EventArgs e)
        {
            var _list = RandomList.Value;
        }

    Going back the other way, I often use either ScriptManager to register a function and pass it data during Page_Load:

        ScriptManager.RegisterStartupScript(this, this.GetType(), "Set", "Test();", true);

    or I add attributes to controls before a postback or during the Initialization / pre-rendering stages:

        Btn.Attributes.Add("onclick", "DisplayMessage('Hello');");

    These methods have served me well and do the job. However, they just don't feel complete. Is there a more standardized way of passing data between client-side markup / javascript and backend code? I've seen some posts like this one: Injecting JavaScript : StackOverflow, that describe the HtmlElement class. Is this something I should look into? Thanks everyone for your time.
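
    For comparison, the same hidden-field round trip can carry structured data if it is serialized first and the postback is raised from script. A minimal sketch, assuming the standard WebForms-emitted __doPostBack helper is present on the page (the SubmitButton control is hypothetical):

        function sendDataToServer(data) {
            // Serialize structured data into the hidden field; it survives the
            // postback as a plain string and can be parsed in the code-behind.
            var field = document.getElementById('<%= RandomList.ClientID %>');
            field.value = JSON.stringify(data);

            // Raise the server-side event for a hypothetical SubmitButton control.
            __doPostBack('<%= SubmitButton.UniqueID %>', '');
        }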

    Read the article

  • Tips & Tricks: How to crawl an SSL-enabled Oracle E-Business Suite

    - by Rajesh Ghosh
    Oracle E-Business Suite can be integrated with Oracle Secure Enterprise Search for a superior end user experience and enhanced data retrieval capabilities. Before end-users can perform search operations, data has to be crawled and indexed into the Oracle SES server. However, if the Oracle E-Business Suite instance is on SSL, some additional configuration is needed in the Oracle SES server as well as in Oracle Search Modeler before a search object can be deployed and crawled. The process involves the following steps:

    Step 1: Export the SSL certificate of Oracle E-Business Suite
    Access the Oracle E-Business Suite instance from a web browser. You should be able to locate a security or certificate icon somewhere in the browser toolbar or status bar, depending on which browser you are using. Click on it and you should be able to view the certificate as well as export it to a local file. While exporting, make sure that you use the "DER encoded" format.

    Step 2: Import the SSL certificate into the Oracle Secure Enterprise Search server's java key-store
    Oracle SES (10.1.8.4) by default ships a JDK under $ORACLE_HOME. The Oracle SES mid-tier uses this JDK to start the oc4j container services. In this step the Oracle E-Business Suite SSL certificate exported in step #1 has to be imported into the Oracle SES server's java key store. Perform the following:
    1. Copy the certificate file onto the server where the Oracle SES server is running, under $ORACLE_HOME/jdk/jre/lib/security/cacerts. "ORACLE_HOME" points to the Oracle SES oracle home.
    2. Set the JAVA_HOME environment variable to $ORACLE_HOME/jdk.
    3. Append $JAVA_HOME/bin to the PATH environment variable.
    4. Issue the command:

        keytool -import -keystore keystore.jks -trustcacerts -alias myOHS -file ebs.crt

    Please substitute "ebs.crt" with the name of the certificate file you copied in step #2.1. The default key-store password is "changeit"; enter it when prompted. If successful, this process will end with the message "certificate successfully imported".

    Step 3: Import the SSL certificate into the Search Modeler java key-store
    Unlike Oracle SES, Search Modeler is not shipped with a bundled JDK. If you are using a standalone OC4J, then you actually use an external JDK to start the oc4j container services. If you are using an IAS instance, then the JDK comes bundled with the IAS installation. Perform the following:
    1. Copy the certificate file onto the server where the Search Modeler application is running, under $JDK_HOME/jre/lib/security/cacerts. "JDK_HOME" points to the JDK directory, depending on whether you are using an external JDK or a bundled one.
    2. Set the JAVA_HOME environment variable to the JDK directory.
    3. Append $JAVA_HOME/bin to the PATH environment variable.
    4. Issue the same keytool command as above, substituting "ebs.crt" with the name of the certificate file you copied in step #3.1. The default key-store password is "changeit"; enter it when prompted. If successful, this process will end with the message "certificate successfully imported".

    Once you have completed the above steps successfully, you can deploy the search objects using Search Modeler and then start crawling them as well.

    Read the article

  • Teach Your Kid to Code (…and Vote early!)

    - by Steve Michelotti
    Next Tuesday I will be at the CMAP main meeting presenting Teach Your Kid to Code. Next Tuesday is of course Election Day so you have to make sure you vote early in order to get over to CMAP for the 7:00PM presentation. I will be co-presenting this talk with my 5th grade son. Here is the abstract: Have you ever wanted a way to teach your kid to code? For that matter, have you ever wanted to simply be able to explain to your kid what you do for a living? Putting things in a context that a kid can understand is not as easy as it sounds. If you are someone curious about these concepts, this is a “can’t miss” presentation that will be co-presented by Justin Michelotti (5th grader) and his father. Bring your kid with you to CMAP for this fun and educational session. We will show tools you may not have been aware of like SmallBasic and Kodu – we’ll even throw in a little Visual Studio and Windows 8! Concepts such as variables, conditionals, loops, and functions will be covered while we introduce object oriented concepts without any of the confusing words. Kids are not required for entry! I promise this will be an entertaining presentation! We hope to see you (and your kids) there. Click here for details.

    Read the article

  • Problem with script substitution when running script

    - by tucaz
    Hi! I'm new to Linux, so this probably should be an easy fix, but I cannot see it. I have a script downloaded from official sources that is used to install additional tools for F#, but it gives me a syntax error when running it. I tried to replace ( and ) with { and }, but that eventually led me to another error, so I think this is not the problem, since the script works for everybody else. I read some articles that say my bash version may not be the right one. I'm using Ubuntu 10.10 and here is the error:

        install-bonus.sh: 28: Syntax error: "(" unexpected (expecting "}")

    And this is lines 27, 28 and 29:

        {
            declare -a DIRS=("${!3}")
            FILE=$2

    And the full script:

        #! /bin/sh -e

        PREFIX=/usr
        BIN=$PREFIX/bin
        MAN=$PREFIX/share/man/man1/

        die() {
            echo "$1" >&2
            echo "Installation aborted." >&2
            exit 1
        }

        echo "This script will install additional material for F# including"
        echo "man pages, fsharpc and fsharpi scripts and Gtk# support for F#"
        echo "Interactive (root access needed)"
        echo ""

        # ------------------------------------------------------------------------------
        # Utility function that searches specified directories for a specified file
        # and if the file is not found, it asks user to provide a directory
        RESULT=""
        searchpaths() {
            declare -a DIRS=("${!3}")
            FILE=$2
            DIR=${DIRS[0]}
            for TRYDIR in ${DIRS[@]}
            do
                if [ -f $TRYDIR/$FILE ]
                then
                    DIR=$TRYDIR
                fi
            done
            while [ ! -f $DIR/$FILE ]
            do
                echo "File '$FILE' was not found in any of ${DIRS[@]}. Please enter $1 installation directory:"
                read DIR
            done
            RESULT=$DIR
        }

        # ------------------------------------------------------------------------------
        # Locate F# installation directory - this is needed, because we want to
        # add environment variable with it, generate 'fsharpc' and 'fsharpi' and also
        # copy load-gtk.fsx to that directory
        # ------------------------------------------------------------------------------
        PATHS=( $1 /usr/lib/fsharp /usr/lib/shared/fsharp )
        searchpaths "F# installation" FSharp.Core.dll PATHS[@]
        FSHARPDIR=$RESULT
        echo "Successfully found F# installation directory."

        # ------------------------------------------------------------------------------
        # Check that we have everything we need
        # ------------------------------------------------------------------------------
        [ $(id -u) -eq 0 ] || die "Please run the script as root."
        which mono > /dev/null || die "mono not found in PATH."

        # ------------------------------------------------------------------------------
        # Make sure that all additional assemblies are in GAC
        # ------------------------------------------------------------------------------
        echo "Installing additional F# assemblies to the GAC"
        gacutil -i $FSHARPDIR/FSharp.Build.dll
        gacutil -i $FSHARPDIR/FSharp.Compiler.dll
        gacutil -i $FSHARPDIR/FSharp.Compiler.Interactive.Settings.dll
        gacutil -i $FSHARPDIR/FSharp.Compiler.Server.Shared.dll

        # ------------------------------------------------------------------------------
        # Install additional files
        # ------------------------------------------------------------------------------
        # Install man pages
        echo "Installing additional F# commands, scripts and man pages"
        mkdir -p $MAN
        cp *.1 $MAN

        # Export the FSHARP_COMPILER_BIN environment variable
        if [[ ! "$OSTYPE" =~ "darwin" ]]; then
            echo "export FSHARP_COMPILER_BIN=$FSHARPDIR" > fsharp.sh
            mv fsharp.sh /etc/profile.d/
        fi

        # Generate 'load-gtk.fsx' script for F# Interactive (ask user if we cannot find binaries)
        PATHS=( /usr/lib/mono/gtk-sharp-2.0 /usr/lib/cli/gtk-sharp-2.0 /Library/Frameworks/Mono.framework/Versions/2.8/lib/mono/gtk-sharp-2.0 )
        searchpaths "Gtk#" gtk-sharp.dll PATHS[@]
        GTKDIR=$RESULT
        echo "Successfully found Gtk# root directory."

        PATHS=( /usr/lib/mono/gtk-sharp-2.0 /usr/lib/cli/glib-sharp-2.0 /Library/Frameworks/Mono.framework/Versions/2.8/lib/mono/gtk-sharp-2.0 )
        searchpaths "Glib" glib-sharp.dll PATHS[@]
        GLIBDIR=$RESULT
        echo "Successfully found Glib# root directory."

        PATHS=( /usr/lib/mono/gtk-sharp-2.0 /usr/lib/cli/atk-sharp-2.0 /Library/Frameworks/Mono.framework/Versions/2.8/lib/mono/gtk-sharp-2.0 )
        searchpaths "Atk#" atk-sharp.dll PATHS[@]
        ATKDIR=$RESULT
        echo "Successfully found Atk# root directory."

        PATHS=( /usr/lib/mono/gtk-sharp-2.0 /usr/lib/cli/gdk-sharp-2.0 /Library/Frameworks/Mono.framework/Versions/2.8/lib/mono/gtk-sharp-2.0 )
        searchpaths "Gdk#" gdk-sharp.dll PATHS[@]
        GDKDIR=$RESULT
        echo "Successfully found Gdk# root directory."

        cp bonus/load-gtk.fsx load-gtk1.fsx
        sed "s,INSERTGTKPATH,$GTKDIR,g" load-gtk1.fsx > load-gtk2.fsx
        sed "s,INSERTGDKPATH,$GDKDIR,g" load-gtk2.fsx > load-gtk3.fsx
        sed "s,INSERTATKPATH,$ATKDIR,g" load-gtk3.fsx > load-gtk4.fsx
        sed "s,INSERTGLIBPATH,$GLIBDIR,g" load-gtk4.fsx > load-gtk.fsx
        rm load-gtk1.fsx
        rm load-gtk2.fsx
        rm load-gtk3.fsx
        rm load-gtk4.fsx
        mv load-gtk.fsx $FSHARPDIR/load-gtk.fsx

        # Generate 'fsharpc' and 'fsharpi' scripts (using the F# path)
        # 'fsharpi' automatically searches F# root directory (e.g. load-gtk)
        echo "#!/bin/sh" > fsharpc
        echo "exec mono $FSHARPDIR/fsc.exe --resident \"\$@\"" >> fsharpc
        chmod 755 fsharpc
        echo "#!/bin/sh" > fsharpi
        echo "exec mono $FSHARPDIR/fsi.exe -I:\"$FSHARPDIR\" \"\$@\"" >> fsharpi
        chmod 755 fsharpi
        mv fsharpc $BIN/fsharpc
        mv fsharpi $BIN/fsharpi

    Thanks a lot!

    Read the article

  • Why do you hate Java? Is it the language or the framework? [closed]

    - by zneak
    According to you all, Java is the third most-hated language here. The two other most-hated languages are PHP and VBScript. (It's quite funny how they stand together on the podium.) I'd like to make it known that this question mostly addresses people who don't like Java. I assume here a number of subjective opinions as facts, because they're usually considered true among people who don't like Java, and I don't want to be convinced otherwise here. If you're a Java enthusiast, you might find this question frustrating. It's never been made clear if people hate Java itself, or if they hate it because of the framework, or if it's a mixture of the two. On one side you have the language, where you have:
    - the "everything should be an object" philosophy, even in instances where it should obviously be something else (event handlers, I'm pointing at you);
    - checked exceptions;
    - the idea that all logic should be presented as methods, and that properties are a big no-no;
    - the fact that "closures" created by anonymous types only include final variables and arguments, but will allow write access to any member of the parent class;
    - a few more.
    On the other side, you have the JDK, with...
    - its load of inconsistencies and overengineering;
    - monolithic class hierarchies;
    - meaningless base exceptions like IOException (though other frameworks have similar exception hierarchies);
    - sluggish responsiveness even with Swing;
    - a few more.
    My question is: do you think that, if either one (Java or the JDK) was taken alone and the other was dropped in favor of something else, the new combination would be better? For instance, if you could use the C# syntax with the JDK (adapting get*/set* methods into properties, and interfaces with only one method into delegates), or the Java syntax with the .NET Framework (doing the inverse transformations), would things get better in your opinion?

    Read the article

  • MVC Portable Areas – Deploying Static Files

    - by Steve Michelotti
    This is the second post in a series related to build and deployment considerations as I've been exploring MVC Portable Areas: #1 – Using Web Application Project to build portable areas; #2 – Conventions for deploying portable area static files; #3 – Portable area static files as embedded resources. As I've been digging more into portable areas, one of the things I've liked best is the deployment story, which enables my *.aspx and *.ascx pages to be compiled into the assembly as embedded resources rather than having to maintain all those files separately. In traditional web forms, that was always the thing that prevented developers from utilizing *.ascx user controls across projects (see this post for using portable areas in web forms). However, though the aspx pages are embedded, the supporting static files (e.g., images, css, javascript) are *not*. Most of the demos available online today tend to brush over this issue and focus solely on the aspx side of things. But to create truly robust portable areas, it's important to have a good story for these supporting files as well. I've been working with two different approaches so far (of course I'd really like to hear if other people are using alternatives).

    Scenario
    For the approaches below, the scenario really isn't that important. It could be something as trivial as this partial view:

        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %>
        <img src="<%: Url.Content("~/images/arrow.gif") %>" /> Hello World!

    The point is that there needs to be careful consideration for *any* scenario that links to an external file such as an image, *.css, *.js, etc. In the example shown above, it uses the Url.Content() method to convert to a relative path. But this method won't necessarily work, depending on how you deploy your portable area. One approach to address this issue is to build your portable area project with MSDeploy/WebDeploy so that it is packaged properly before incorporating it into the host application. All of the *.cs files are removed and the project is ready for xcopy deployment – however, I do *not* need the "Views" folder, since all of the markup has been compiled into the assembly as embedded resources. Now in the host application we create a folder called "Modules" and deploy any portable areas as sub-folders under that. At this point we can add a simple assembly reference to the Widget1.dll sitting in the Modules\Widget1\bin folder. I can now render the portable area in my view like any other portable area. However, the problem is that the view couldn't find arrow.gif, because it looked for /images/arrow.gif when it was *actually* located at /Modules/Widget1/images/arrow.gif. One solution is to make the physical location of the portable area configurable from the perspective of the host, like this:

        <appSettings>
            <add key="Widget1" value="Modules\Widget1"/>
        </appSettings>

    Using the <appSettings> section is a little cheesy, but it could be better formalized into its own section. In fact, if you were willing to rely on conventions (e.g., "Modules\{areaName}") then the config could be eliminated completely. With this config in place, we could create our own Html helper method called Url.AreaContent() that "wraps" the OOTB Url.Content() method while simply pre-pending the area location path:

        public static string AreaContent(this UrlHelper urlHelper, string contentPath)
        {
            var areaName = (string)urlHelper.RequestContext.RouteData.DataTokens["area"];
            var areaPath = (string)ConfigurationManager.AppSettings[areaName];

            return urlHelper.Content("~/" + areaPath + "/" + contentPath);
        }

    With these two items in place, we just change our Url.Content() call to Url.AreaContent() like this:

        <img src="<%: Url.AreaContent("/images/arrow.gif") %>" /> Hello World!

    and the arrow.gif now renders correctly. Since we're just using our own Url.AreaContent() rather than the built-in Url.Content(), this solution works for images, *.css, *.js, or any externally referenced files. Additionally, any images referenced inside a css file will work, provided it's a relative reference and not an absolute reference. An alternative to this approach is to build the static files into the assembly as embedded resources themselves. I'll explore this in another post (linked at the top).

    Read the article

  • Using JuJu with private Openstack cloud deployment?

    - by user76054
    I'm seeing a number of problems trying to use JuJu with our internally deployed Openstack cloud. Most of this appears to be centered around DNS host resolution, as well as the need to deal with our company's internal HTTP proxies. Our Openstack deployment relies upon an unroutable 172.16.0.0/12 block of addresses for VLAN allocation to each project (tenant) hosted on our internal cloud. Users have the option of assigning one or more floating addresses to instances, allocated from a block of routable addresses on our company's internal LAN. Currently, Openstack doesn't register instance names with anything other than the DNSMASQ service running on the cloud controller. As such, there's no way to resolve this address through our internal DNS hierarchy (this issue has already been reported as Bug #945505). So, even though I can bootstrap my JuJu server node, I can't connect to it with the JuJu client, since it can't resolve the local (private) network name. I am able to ssh to the node, once I've assigned it an internally routable (i.e. floating) address. Which leads to the next issue. To install software on an instance running in our cloud, it must have our internal proxy address defined - either in the apt.conf file or via environment variables. Unfortunately, when bootstrapping the server node, there's no provision to pass this info into an instance via the JuJu environment.yaml file (if this is even the best way to handle this issue). As a result, the bootstrap node is unable to install the required packages. I'm assuming (dangerous, I know) that the way I've deployed Openstack in our internal environment is probably not unique. Has anyone else encountered these issues? And more importantly, are workarounds available? Regards, Ross

    Read the article

  • kubuntu 12.10 will not boot on mac 2.93Ghz intel core 2 duo

    - by Jake Sweet
    I feel like I've tried it all and nothing is changing. I've tried booting from a live USB and a live DVD, and I've checked the md5 - everything matches up. I've even tried different distros, with the same result on all of them. Just for reference: Linux Mint 13 KDE and Fedora 17. I've also tried changing my live USB building software, just in case. I've tried unetbootin and Linux USB builder. Both have the same results, so my opinion is that it is a hardware issue, since I'm having nearly the same result with all of these variables. So now, what is actually happening? I can boot up to a screen. I say *a* screen because some of the ways that DVDs and USBs boot differ. On the live USB I'm reaching a black screen with white text. It says booting: done, then below it says loading ramdrive: done, then below that it says preparing to boot kernel, this may take a while, buckle in, or something to that effect. Then nothing. That's it, the computer freezes. I've waited up to 8 hrs and still nothing. OK, for the live DVD: everything goes according to the instructions per the pdf files on every distro, until Linux starts. I can only run in compatibility mode. When any other option is tried, the computer seems to freeze/stall/be a pain in my butt... OK, well that seems to wrap it up. Also, if I'm not explaining something well, I'm sorry; I can try to clear anything up. I'm not the best at descriptions. I'm leaving you with the tech specs of my mac: 2.93GHz Intel Core 2 Duo, 4 GB RAM, NVIDIA GeForce GT 120 graphics, bought in late '09, it's the 24" model. Let me know if any more information will help. Also, thanks in advance.

    Read the article

  • Is there a name for a testing method where you compare a set of very different designs?

    - by DVK
    "A/B testing" is defined as "a method of marketing testing by which a baseline control sample is compared to a variety of single-variable test samples in order to improve response rates". The point here, of course, is to know which small single-variable changes are more optimal, with the goal of finding the local optimum. However, one can also envision a somewhat related but different scenario for testing the response rate of major re-designs: take a baseline control design, take one or more completely different designs, and run test samples on those redesigns to compare response rates. As a practical but contrived example, imagine testing a set of designs for the same website, one being minimalist "googly" design, one being cluttered "Amazony" design, and one being an artsy "designy" design (e.g. maximum use of design elements unlike Google but minimal simultaneously presented information, like Google but unlike Amazon) Is there an official name for such testing? It's definitely not A/B testing, since the main component of it (finding local optimum by testing single-variable small changes that can be attributed to response shift) is not present. This is more about trying to compare a set of local optimums, and compare to see which one works better as a global optimum. It's not a multivriable, A/B/N or any other such testing since you don't really have specific variables that can be attributed, just different designs.

    Read the article

  • Transient VO : Powerful J2EE Design Pattern

    - by Vijay Mohan
    We had a use-case wherein communication has to happen between regions residing under different task flows. Essentially, they had a common set of parameters to be used. Initially, we resorted to the use of pageFlowScope variables, but they are tightly coupled with the individual task flows. So, how should the communication happen..? Some of the alternatives that we brainstormed are:
    1. Usage of an ADF contextual event - This is a powerful feature indeed for such use-cases, but there is a considerable cost involved with it. So, before resorting to it, you have to make sure that you have a good enough reason to use it. It actually does a server roundtrip, and the issuing of an event and the listening part to it is also something which requires your attention!!
    2. Use a transient VO with shared data control scope - With shared data control scope, the transient VO rows would be persistent across the task flows in your application. All you have to do is to create the attributes in the transient VO (preferably with the same names - for the ease of conversion) and create some utility methods in the VOImpl for creating a row, updating a row and deleting a row. You also have to make sure that the VO row is initialized per http request (this you can do in a bookmark method of your index.jspx - residing in adfc-config.xml), else the UI fields bound to the transient VO attributes won't render in the UI.
    Hope this helps; this should be a common use-case across apps.

    Read the article

  • Input handling between game loops

    - by user48023
    This may be obvious and trivial for you, but as I am a newbie in programming I come with a specific question. I have three loops in my game engine: an input loop, an update loop and a render loop. The update loop is set to 10 ticks per second with a fixed timestep, the render loop is capped at around 60 fps, and the input loop runs as fast as possible. I am using one of the Javascript frameworks which provide such things, but it doesn't really matter. Let's say I am rendering a tile map, and which elements are in view depends on camera-like movement variables which are modified during key presses. This is only about the camera/viewport and rendering - no game physics involved here. And now, how can I handle input events among these loops to keep the engine's reaction consistent? Am I supposed to read the current variable modified by input, do the needed calculations in the update loop, and share the result so it can be interpolated in the render loop? Or read the input's effect directly inside the render loop and put the needed calculations there? I thought interpreting user input inside an update loop with a low tick rate would be inaccurate and kind of unresponsive while rendering with interpolation in the final view. How is it done properly in games in general?
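
    The usual division of labour looks roughly like the sketch below, assuming a browser setting (the drawTileMap renderer and the accumulator-driven main loop are hypothetical): input handlers only record state, the fixed-rate update consumes that state, and the render loop interpolates between the last two update results so the low tick rate stays invisible:

        var input = { left: false, right: false };          // written by input events
        var cam = { prevX: 0, x: 0 };                       // written by update ticks

        document.addEventListener('keydown', function (e) {
            if (e.key === 'ArrowLeft')  input.left  = true;
            if (e.key === 'ArrowRight') input.right = true;
        });
        document.addEventListener('keyup', function (e) {
            if (e.key === 'ArrowLeft')  input.left  = false;
            if (e.key === 'ArrowRight') input.right = false;
        });

        var STEP = 1000 / 10;                               // 10 ticks per second
        function update() {                                 // runs at the fixed rate
            cam.prevX = cam.x;
            if (input.left)  cam.x -= 50;                   // 50 px per tick
            if (input.right) cam.x += 50;
        }

        function render(accumulator) {                      // runs at ~60 fps
            var alpha = accumulator / STEP;                 // 0..1 progress into the tick
            var viewX = cam.prevX + (cam.x - cam.prevX) * alpha;
            drawTileMap(viewX);                             // hypothetical renderer
        }

    Reading input directly in the render loop would tie camera speed to frame rate; sampling it in the update loop and interpolating keeps the simulation deterministic while the output stays smooth.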

    Read the article

  • One page using querystring or many folders and pages?

    - by ClarkeyBoy
    I have an application where I have the 'core' code in one folder, for which there is a virtual directory in the root, such that I can include any core files using /myApp/core/bla.asp. I then have two folders outside of this, each with a default.asp which currently uses the querystring to define what page should be displayed. One page is for general users; the other will only be accessible to users who have permission to manage users / usergroups / permissions. The core code checks the querystring and then checks the permissions for that user. An example of this as it is now is default.asp?action=view&viewtype=list&objectid=server. I am not worried about SEO as this is an internal app and uses Windows Auth. My question is: is it better the way it is now, or would it be better to have something like the following?

        /server/view/list/
        /server/view/?id=123
        /server/create/
        /server/edit/?id=123
        /server/remove/?id=123

    In the above folders I would have a home page which defines all the variables which are currently determined by the querystring - in /server/create/, for example, I would define the action as 'create', the object name as 'server' and so on. In terms of future development, I really have no idea which method would be best. I think the 2nd method would be best in terms of following what page does what, but this is such a huge change to make at this stage that I would really like some opinions, preferably based on experience. PS Sorry if the tags are wrong - I am new to this forum and thought this was a bit too much of a discussion for StackOverflow, as that is very much right/wrong answer based. I got the idea SE is more discussion based.

    Read the article

  • Access Denied

    - by Tony Davis
    When Microsoft executives wake up in the night screaming, I suspect they are having a nightmare about their own version of Frankenstein's monster. Created with the best of intentions, without thinking too hard of the long-term strategy, and having long outlived its usefulness, the monster still lives on, occasionally wreaking vengeance on the innocent. Its name is Access; a living synthesis of disparate body parts that is resistant to all attempts at a mercy-killing. In 1986, Microsoft had no database products, and needed one for their new OS/2 operating system, the successor to MSDOS. In 1986, they bought exclusive rights to Sybase DataServer, and were also intent on developing a desktop database to capture Ashton-Tate's dominance of that market, with dbase. This project, first called 'Omega' and later 'Cirrus', eventually spawned two products: Visual Basic in 1991 and Access in late 1992. Whereas Visual Basic battled with PowerBuilder for dominance in the client-server market, Access easily won the desktop database battle, with Dbase III and DataEase falling away. Access did an excellent job of abstracting and simplifying the task of building small database applications in a short amount of time, for a small number of departmental users, and often for a transient requirement. There is an excellent front end and forms generator. We not only see it in Access but parts of it also reappear in SSMS. It's good. A business user can pull together useful reports, without relying on extensive technical support. A skilled Access programmer can deliver a fairly sophisticated application, whilst the traditional client-server programmer is still sharpening his pencil. Even for the SQL Server programmer, the forms generator of Access is useful for sketching out application designs. So far, so good, but here's where the problems start; Access ties together two different products, and the backend of Access is the bugbear. The limitations of Jet/ACE are well-known and documented. They range from MDB files that are prone to corruption, especially as they grow in size, to pathetic security and "copy and paste" backups. The biggest problem, though, was an infamous lack of scalability. Because Microsoft never realized how long the product would last, they put little energy into improving the beast. Microsoft 'ate their own dog food' by using Access for Microsoft Exchange and Outlook. They choked on it. For years, scalability and performance problems with Exchange Server have been laid at the door of the Jet Blue engine on which it relies. Substantial development work in Exchange 2010 was required, just in order to improve the engine and storage schema so that it more efficiently handled the reading and writing of mails. The alternative of using SQL Server just never panned out. The Jet engine was designed to limit concurrent users to a small number (10-20). When Access applications outgrew this, bitter experience proved that there really is no easy upgrade path from Access to SQL Server, beyond rewriting the whole lot from scratch. The various initiatives to do this never quite bridged the cultural gulf between Access and a true relational database. So, what are the obvious alternatives for small, strategic database applications? I know many users who, for simple 'list maintenance' requirements, are very happy using Excel databases.
    Surely, now that PowerPivot has led the way, it is time for Microsoft to offer a new RAD package for database application development; namely an Excel-based front end for SQL Server Express. In that way, we'll have a powerful and familiar front end to a scalable database, and a clear upgrade path when an app takes off and needs to go enterprise. Cheers, Tony.

    Read the article

  • New JavaScript Editor

    - by Petr
    I have not written a blog post here for a few weeks. I think my last post was about releasing NetBeans 7.1 at the beginning of January. The reason is not that I have changed jobs :), but that I have been concentrating on the new JavaScript support/editor. The new JavaScript editor is written basically from scratch. The answer to the question "Why start from the beginning again, why not just improve the old one?" is not easy, and the decision has several aspects. One of the main reasons is that the old support was written 4 years ago and its architecture is limited. Also, over time the APIs changed, and it was very hard to keep the editor up to date. There is also a license issue, etc. In short, it is time to rewrite the old JS editor. We have built up a strong community around the PHP support in NetBeans, and because many PHP developers also write JavaScript code, I would like to ask you for help. There is a continual PHP build with the new JavaScript support. You can download the result of the builds here. It's a zip file. You can unzip the file anywhere you want. I recommend running the build with a new userdir, to avoid damaging your current userdir. It shouldn't happen, but just to be sure :). You can achieve this through the --userdir switch. So the unzipped build can be started from the command line, from the folder where you unzipped it, with this command on unix:

        bin/netbeans.sh --userdir /path/to/new/userdir

    and on windows:

        bin\netbeans.exe --userdir D:\path\to\new\userdir

    For the developers who already use the continual PHP build, this is well known. There is also a full IDE build with the new JavaScript support for people who need more than only PHP support. Because the builds with the new JavaScript editor are created from a branch, there are no nightly builds available. They will be, when we merge the branch to the trunk, but so far we have to work only with the mentioned continual build. We will merge our branch after branching NetBeans 7.2 from trunk. This also answers the question of which release of NetBeans will contain the new JS support: it should be the release after NetBeans 7.2. I'm asking you whether you could play with the builds or, better, work in the builds with the new JavaScript support and tell us about every issue that you run into. It can be anything that doesn't suit you: something doesn't work as you expected, something is slow, you want to change the behaviour of a feature, etc. Your input / comments are very important for us and will help us achieve the new JavaScript support that you need. The best way to communicate issues is through our Bugzilla, because it is simple to track them. Sure, you can write a comment here :), but still I prefer Bugzilla for any issue. You can click here (you should already be logged in to Bugzilla) and a form for a new JavaScript issue is opened, with the component Editor and the NO72 keyword pre-filled. I will write about the individual features later, but for now I will mention a few features that should work better than in the old support:
    - Syntactic and semantic colouring
    - Navigator
    - Mark Occurrences and GoTo Declaration
    - Code Completion

    Code Completion is invoked through the keyboard shortcut CTRL+SPACE. The first invocation offers items that are found through a source model. Almost all editor features are based on the model, which is built from the source code. There is still a lot of work to do on the model, but it should offer better results. When the popup window with code completion items is open and you press CTRL+SPACE again, the code completion offers all elements that are in the project - in the pictures, all elements that start with the letter 't'. There is also a formatter with many options, and more :). A few features that are supported in the old JavaScript support (for example jQuery support) are not yet implemented, but we are adding these features ASAP.

    Read the article

  • Bad style programming, am I pretending too much?

    - by Luca
    I've realized that I work in an office with a quite bad code base. The base library, implemented over years and years, is quite limited, and most of that code is, honestly, horrible. Projects developed in the office are very large. Fine. I could define myself a "perfectionist" (but often I'm not), and I thought about refactoring an application (really, a portion of it) which needs a new (complex) feature. But, today, I really realized that it's not possible to refactor those application modules in a reasonable time (say, 24/26 hours, against the available time for the task, which is 160 hours). I'm talking about (I am a bit ashamed to say) name collisions, large and frequent cut & paste code, horrible and misleading naming, makefiles without dependencies (!), application logic spread randomly across many different sources, dead code, variable aliasing, no assertions, no documentation, very long source files, bad/incomplete include file definitions, (this is emblematic!) very frequent extern declarations of variables and functions, ... I'm sure I could continue ... buffer overflows because of sprintf, indentation (!), spacing, non-existent const modifier usage. I would say that every source line was written quite randomly when needed, without keeping some design in mind (at least, the obvious one). (Am I in hell?) The problem arises when the application is developed by a colleague of mine. I felt very frustrated. So, I decided to expose the "situation" to my colleague; in the end, that was a bad idea. He is justified in saying that "the application was developed in haste, so it is natural that it is written vaguely; you are wasting time thinking about and implementing an elegant implementation" ... Am I asking too much of my colleague in wanting readable code that is managed and documented? Do I expect too much in not having to read thousands of lines of code to understand how a particular piece of logic works?

    Read the article

  • Translation and Localization Resources for UX Designers

    - by ultan o'broin
    Here is a handy list of translation and localization-related resources for user experience professionals. Following these will help you design an easily translatable user experience. Most of the references here are for web pages or software. Fundamentally, remember your designs will be consumed globally, and never divorce the design process from the development or deployment effort that goes into bringing your designs to life in code. Ask yourself today: do you know how the text you are using in your designs is delivered to the customer, even in English? Key areas that UX designers always seem to fall foul of, in my space anyway, are:
    - Terminology that is impossible to translate (jargon, multiple modifiers, gerunds) or is used inconsistently
    - Poorly written, verbose text (really, just write well in English, no special considerations)
    - String construction (concatenation of parts assembled dynamically)
    - Composite widget positioning (my favourite)
    - Hard-coded fonts, small font sizes, or character formatting or casing that doesn't work globally
    - Format that is not separate from content
    - Restricted real estate not allowing for text expansion in translation
    - Forcing formatting with breaks, and hard-coding alphabetical sorting
    - Graphics that do not work in Bi-Di languages (because they indicate directionality and can't flip) or contain embedded text. The problems of culturally offensive icons are well known by now in the enterprise applications space, though there are some dangers, such as the use of flags to indicate language, for example.
    Resources:
    - Internationalization Techniques: Authoring HTML & CSS
    - Global By Design
    - Insert Title Here: Variables in Interface Language
    - Prose: Internationalisation
    Doc and help considerations I can deal with later.

    Read the article

  • creating bounding box list

    - by Christian Frantz
    I'm trying to create a list of bounding boxes for each cube drawn, so I can use the boxes to intersect with a ray that my mouse position is casting, but I have no idea how. I've created a list that stores the boxes, but how am I getting the values from each box?

        for (int x = 0; x < mapHeight; x++)
        {
            for (int z = 0; z < mapWidth; z++)
            {
                cubes.Add(new Vector3(x, map[x, z], z), Matrix.Identity, grass);
                boxList.Add(something here);
            }
        }

        public Cube(GraphicsDevice graphicsDevice)
        {
            device = graphicsDevice;

            var vertices = new List<VertexPositionTexture>();
            BuildFace(vertices, new Vector3(0, 0, 0), new Vector3(0, 1, 1));
            BuildFace(vertices, new Vector3(0, 0, 1), new Vector3(1, 1, 1));
            BuildFace(vertices, new Vector3(1, 0, 1), new Vector3(1, 1, 0));
            BuildFace(vertices, new Vector3(1, 0, 0), new Vector3(0, 1, 0));
            BuildFaceHorizontal(vertices, new Vector3(0, 1, 0), new Vector3(1, 1, 1));
            BuildFaceHorizontal(vertices, new Vector3(0, 0, 1), new Vector3(1, 0, 0));

            cubeVertexBuffer = new VertexBuffer(device, VertexPositionTexture.VertexDeclaration, vertices.Count, BufferUsage.WriteOnly);
            cubeVertexBuffer.SetData<VertexPositionTexture>(vertices.ToArray());
        }

    There aren't any clearly defined variables for the bounds of each cube created, so where do I create the bounding box from?

    Read the article

  • After upgrading to 12.04 from 10.10 my mythbuntu standard MCEUSB remote no longer works

    - by keepitsimpleengineer
    I had no problems using my Windows Media Center remote with 10.10 Mythbuntu, but after upgrading, it no longer affects Mythbuntu. I have verified and re-installed it in Mythbuntu Control Centre. I have used irw to verify that the IR button actions are properly received by the HTPC. How do I go about fixing this?

        3.2.0-26-generic (#41-Ubuntu SMP Thu Jun 14 17:49:24 UTC 2012)
        Xorg version: 1.11.3 (16 July 2012 08:06:31PM)
        GCC: 4.6 (x86_64-linux-gnu)
        Current updates as of 2012-07-21

        $ cat /etc/lirc/hardware.conf
        #Chosen Remote Control
        REMOTE="Windows Media Center Transceivers/Remotes (all)"
        REMOTE_MODULES="lirc_dev mceusb"
        REMOTE_DRIVER=""
        REMOTE_DEVICE="/dev/lirc0"
        REMOTE_SOCKET=""
        REMOTE_LIRCD_CONF="mceusb/lircd.conf.mceusb"
        REMOTE_LIRCD_ARGS=""
        #Chosen IR Transmitter
        TRANSMITTER="None"
        TRANSMITTER_MODULES=""
        TRANSMITTER_DRIVER=""
        TRANSMITTER_DEVICE=""
        TRANSMITTER_SOCKET=""
        TRANSMITTER_LIRCD_CONF=""
        TRANSMITTER_LIRCD_ARGS=""
        #Enable lircd
        START_LIRCD="true"
        #Don't start lircmd even if there seems to be a good config file
        #START_LIRCMD="false"
        #Try to load appropriate kernel modules
        LOAD_MODULES="true"
        # Default configuration files for your hardware if any
        LIRCMD_CONF=""
        #Forcing noninteractive reconfiguration
        #If lirc is to be reconfigured by an external application
        #that doesn't have a debconf frontend available, the noninteractive
        #frontend can be invoked and set to parse REMOTE and TRANSMITTER
        #It will then populate all other variables without any user input
        #If you would like to configure lirc via standard methods, be sure
        #to leave this set to "false"
        FORCE_NONINTERACTIVE_RECONFIGURATION="false"
        START_LIRCMD=""

        # lsusb | grep -i infrared
        Bus 003 Device 002: ID 0471:0815 Philips (or NXP) eHome Infrared Receiver

    Read the article

  • DotNetNuke 7.0 Only Weeks Away!

    - by sbwalker
    The software industry moves at a lightning pace, and it is only through constant focus and continuous investment that a software product can remain both stable and relevant over the long term. As we approach the 10 Year Anniversary of the DotNetNuke platform, it seems only fitting that we are on the verge of announcing yet another significant product milestone. DotNetNuke 7.0 is just around the corner and represents a bold step forward for our Content Management Platform, including substantial business productivity enhancements, investments in web platform relevance, and a significant overhaul and modernization of the user interface and user experience. It has been five months since I posted the announcement that the next major version of the platform was going to be DotNetNuke 7.0.  This announcement created tremendous excitement and anticipation in the DotNetNuke community, as major version increments have always been utilized as an opportunity  to introduce revolutionary new product features and capabilities. After months of intense product development, the finish line is finally in sight. With that, I am pleased to announce that we released a Release Candidate (RC) of DotNetNuke 7.0 yesterday. You can download the RC from our project page on Codeplex. A Release Candidate represents a software version which is very near to “release” quality. So although we will not be officially endorsing the RC for production use, or providing an official upgrade path, it does represent a significant milestone in our software development efforts ( if you are looking for a more detailed explanation of our software release terminology, I would encourage you to read the blog written by Co-Founder, Joe Brinkman titled "What's In A Name?" ). Modernizing a software platform does have its share of challenges from a backward compatibility perspective and, as usual, we are taking great care in ensuring a seamless upgrade path for our customers. In order to remain relevant and progressive, you need to be aware that DotNetNuke 7.0 has adopted a new set of baseline infrastructure requirements including ASP.NET 4.0.  As a result we are encouraging all major stakeholders in the ecosystem ( module developers, designers, partners, customers, etc... ) to take the opportunity to install the RC in their own local environments. This is the last opportunity to let us know about any final issues which may need to be addressed prior to final release. Mark your calendars now… the expected public release date (RTM) for DotNetNuke 7.0 will be Wednesday, November 28th. On a side note, we expect to release a 6.2.5 Maintenance version today. This release contains some high priority product quality improvements as well as security patches for some vulnerabilities reported through our standard ecosystem channels. As a result we will be encouraging all of our customers to upgrade to the 6.2.5 release as soon as it is available. I hope everyone is as excited as I am about the upcoming DotNetNuke 7.0 release. Please take the opportunity over the next week to put the new platform through its paces. Remember, only through our collective efforts can we ensure that this release has the greatest market impact of any DotNetNuke release to date.

    Read the article

  • Getting Started with StreamInsight 2.1

    - by Roman Schindlauer
    If you're just beginning to get familiar with StreamInsight, you may be looking for a way to get started. What are the basics? How can I get my first StreamInsight application running so I can see how it works? Where is the 'front door' that will get me going? If that describes you, then this blog entry might be just what you need. If you're already a StreamInsight wiz, keep reading anyway - you may find some helpful links here that you weren't aware of. But here's what we'd like from you experienced readers in particular: if you know of other good resources that we missed, please feel free to add them in the comments below. We appreciate you sharing your expertise. The Book The basic documentation for StreamInsight is located in the MSDN Library (Microsoft StreamInsight 2.1). You'll notice that previous versions of StreamInsight are still there (1.2 and 2.0), but if you're just getting started you can stick to the 2.1 section. The documentation has been organized to function as reference material, which is fine after you're familiar with the technology. But if you're trying to learn the basics, you might want to take a different path instead of just starting at the top. The following is one map you can use. What Is StreamInsight? Here is a sequence of topics that should give you a good overview of what StreamInsight is and how it works: Overview answers the question, "what is it?" StreamInsight Server Architecture gives you a quick look at a high-level architectural drawing StreamInsight Concepts lays out an overview of the basic components Deploying StreamInsight Entities to a StreamInsight Server describes the mechanics of how these components work together Getting an Example Running Once you have this background, go ahead and install StreamInsight and get a basic example up and running: Installation download and install the software StreamInsight Examples walk through a set of 3 simple StreamInsight applications that work together to demonstrate what you learned in the topics above; you can copy and paste the code into Visual Studio, compile, and run That's it - you now have a real, functioning StreamInsight system! Now that you have a handle on the basics, you might want to start digging deeper. 
Digging Deeper Here's a suggested path through the documentation to help you understand the next layer of StreamInsight technologies: Using Event Sources and Event Sinks sources supply data and sinks consume it; this topic gives you an overview of how they work Publishing and Connecting to the StreamInsight Server practical details on how to set up a StreamInsight server A Hitchhiker’s Guide to StreamInsight 2.1 Queries queries are the heart of how StreamInsight performs data analytics, and this whitepaper will help you really understand how they work Using StreamInsight LINQ root through this section for technical details on specific query components Using the StreamInsight Event Flow Debugger in addition to troubleshooting, the debugger is a great way to learn more about what goes on inside a StreamInsight application And Even Deeper Finally, to get a handle on some of the more complex things you can do with StreamInsight, dig into these: Input and Output Adapters adapters can be useful for handling more complex sources and sinks Building Resilient StreamInsight Applications a resilient application is able to recover from system failures Operations this section will help you monitor and troubleshoot a running StreamInsight system The StreamInsight Community As you're designing and developing your StreamInsight solutions, you probably will find it helpful to see working examples or to learn tips and tricks from others. Or maybe you need a place to post a vexing question. Here are some community resources that we have found useful. If you know of others, please add them in the comments below. Code samples and tools Official StreamInsight code samples Introduction to LinqPad Driver for StreamInsight 2.1 - LinqPad is a very useful tool for developing queries The following case studies are based on earlier versions of StreamInsight, but they still are useful examples: Microsoft Media Analytics - real-time monitoring and analytic Edgenet - responding to information from multiple source ICONICS - managing energy usage Blogs Microsoft StreamInsight Ruminations of J.net Richard Seroter's Architecture Musings pluralsight Forums MSDN StreamInsight Forum stackoverflow Training Microsoft StreamInsight Fundamentals (“Introducing StreamInsight” is free) from pluralsight Twitter @streaminsight   You’re a StreamInsight Expert That should get you going. Please add any other resources you have found useful in the comments below.   Regards, The StreamInsight Team

    Read the article

  • Why I don't use SSIS checkpoint files

    - by jamiet
    In a recent discussion regarding general ETL best practices, the subject of checkpoint files as a means for package restartability came up, and I stated that I was dead against using them. For anyone that may care, here is why:
    - Configuring them is distinctly unintuitive (that's a matter of opinion, but if you follow the link I'll wager that you will agree)
    - they don't make any allowance for loop iterations
    - they cannot store variables of type Object
    - they are limited in ability. There are many scenarios where you may want to execute certain containers regardless of whether the package is started from a checkpoint file, but the current usage model does not allow for this.
    - they are ignored by eventhandlers, which wouldn't be so bad if there were a way to toggle this behaviour in certain scenarios
    - they don't work properly
    I'll expand on the last bullet point. I have encountered situations where the behaviour for tasks executing concurrently is unpredictable. That is, sometimes the completion of a task that executes concurrently with a failed/failing task will make it into the checkpoint file, and sometimes it won't. This is near-impossible to reproduce, but it does happen, as my good friend John Welch will hopefully concur (if he is reading). Is anyone out there making successful use of checkpoint files within SSIS? I would be interested in knowing about that if so. @Jamiet

    Read the article

< Previous Page | 395 396 397 398 399 400 401 402 403 404 405 406  | Next Page >