Search Results

Search found 3534 results on 142 pages for 'sets'.

Page 53 of 142

  • Algorithm to figure out appointment times?

    - by Rachel
    I have a weird situation where a client would like a script that automatically sets up thousands of appointments over several days. The tricky part is that the appointments are for a variety of US time zones, and I need to take the consumer's local time zone into account when generating appointment dates and times for each record. Appointment rules:

    - Appointments should be set from 8AM to 8PM Eastern Standard Time, with breaks from 12P-2P and 4P-6P. This leaves a total of 8 hours per day available for setting appointments.
    - Appointments should be scheduled 5 minutes apart. 8 hours of 5-minute intervals means 96 appointments per day.
    - There will be 5 users at a time handling appointments. 96 appointments per day multiplied by 5 users equals 480, so the maximum number of appointments that can be set per day is 480.

    Now the tricky requirement: appointments are restricted to 8AM to 8PM in the consumer's local time zone. This means that the earliest time allowed for each appointment differs depending on the consumer's time zone:

    - Eastern: 8A
    - Central: 9A
    - Mountain: 10A
    - Pacific: 11A
    - Alaska: 12P
    - Hawaii or Undefined: 2P
    - Arizona: 10A or 11A, based on current Daylight Saving Time

    Assuming a data set can be several thousand records, and each record will contain a timezone value, is there an algorithm I could use to determine a date and time for every record that matches the rules above?
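
    A hedged sketch of one greedy approach to these rules, in Python: bucket records by their earliest allowed Eastern start, then walk the day's valid 5-minute slots (5 assignments per slot for the 5 users), always serving the most-constrained time zones first. All field names are illustrative, and Arizona's DST-dependent start is left for the caller to resolve before lookup:

        from datetime import datetime, timedelta

        # Earliest allowed Eastern start hour per consumer time zone (per the rules).
        # "Arizona" is 10 or 11 depending on DST; resolve it before the lookup.
        EARLIEST_ET = {"Eastern": 8, "Central": 9, "Mountain": 10, "Pacific": 11,
                       "Alaska": 12, "Hawaii": 14, "Undefined": 14}

        def day_slots(day):
            """Yield every valid 5-minute Eastern slot: 8A-12P, 2P-4P, 6P-8P."""
            for start_h, end_h in [(8, 12), (14, 16), (18, 20)]:
                t = day.replace(hour=start_h, minute=0, second=0, microsecond=0)
                while t.hour < end_h:
                    yield t
                    t += timedelta(minutes=5)

        def schedule(records, first_day, users=5):
            # Serve the most-constrained zones first, so late-only slots stay free for them.
            pending = sorted(records, key=lambda r: -EARLIEST_ET[r["tz"]])
            appointments, day = [], first_day
            while pending:
                for slot in day_slots(day):
                    for _ in range(users):
                        # First eligible record is the most-constrained eligible one.
                        i = next((i for i, r in enumerate(pending)
                                  if EARLIEST_ET[r["tz"]] <= slot.hour), None)
                        if i is None:
                            break          # nobody is allowed this early in the day
                        appointments.append((pending.pop(i), slot))
                    if not pending:
                        break
                day += timedelta(days=1)
            return appointments

        # e.g.: schedule([{"tz": "Pacific"}, {"tz": "Hawaii"}], datetime(2010, 5, 3))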

    Read the article

  • Tips for switching jobs and moving into web based programming?

    - by JerryC
    I graduated in 2006 with a computer science degree and got solid grades (3.5 overall, 3.8 in my major). For the past 4.5 years I've been working as a Software Engineer doing primarily rich-client development. Most of my experience is with Java, Swing and C++. I've done a lot of network programming and I have acquired some skill working & debugging in distributed environments. I would like to switch jobs and move into a role where I can get exposure to some new technologies and frameworks. I would like to move into a more web-development role, but I find my lack of web development experience is hurting me. 90% of the jobs I see advertised are looking for one of two skill sets: 1) the stereotypical server-side Java web developer, with experience in Spring, Hibernate, J2EE, etc.; 2) the stereotypical front-end web developer, with experience in Javascript, jQuery, HTML5, GWT, CSS, etc. I find most of these companies are looking really specifically for this experience, and they are not willing to take on good programmers with strong CS fundamentals who lack experience with this stuff. I would love to get a job doing stuff like this, but have my skills become out of date and unmarketable? Any opinions on ways to sell myself to help get a new position?

    Read the article

  • Packages not showing up in created APT repository

    - by David
    I created an APT repository using dpkg-scanpackages, and it seemed to go well. When I did an apt-get update on another server, the Packages.gz file was retrieved, and all seemed well - until I went to search for the packages contained in that repository (all packages are created locally). Several recommendations suggested reprepro; I tried that. Same result - except I had to rebuild the packages with the Priority and Section lines in the control file (nothing says this anywhere). The reprepro utility also generates a complicated directory structure, which required rewriting the repository entry on the requesting server. I then found that the arch directory referenced i386 and not amd64 (which was requested by the requesting server). Is it possible that the AMD64 system isn't seeing packages compiled for i386? Searching the *Packages files in /var/lib/apt/lists shows that the only packages for i386 are those I added (the other files are for the server - Ubuntu 10.04.2 LTS). The server the packages were built on is Ubuntu 10.04.2 LTS i686; the requesting server is x86_64. I found some discussion at the Debian AMD64FAQ but it claims to be obsolete. It makes mention of an extended syntax for repository listings for APT, and a command dpkg-subarchitecture - neither of which works on the local AMD64 server. Do I have to build two different sets of packages?
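
    A hedged aside: an amd64 apt client reads the binary-amd64 package lists, so i386-only .debs in a repository are invisible to it unless they are Architecture: all or the repository also publishes an amd64 architecture list containing them. With reprepro, the served architectures are declared in conf/distributions; the codename and component below are placeholders, not values from the question:

        # conf/distributions (illustrative values only)
        Codename: lucid
        Components: main
        Architectures: i386 amd64

    Truly architecture-independent packages are better rebuilt with Architecture: all in debian/control, so a single build serves both servers.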

    Read the article

  • Developing Ext JS Charts in NetBeans IDE

    - by Geertjan
    I took my first tentative steps into the world of Ext JS charts today, in NetBeans IDE 7.4. I will make a screencast soon showing how charts such as the above can be created with NetBeans IDE and Ext JS. Setting up Ext JS is easy in NetBeans IDE because there's a JavaScript library browser, by means of which I can browse for the Ext JS libraries that I need, and then NetBeans IDE sets up the project for me. The JavaScript code shown above comes directly from here: http://www.quizzpot.com/courses/learning-ext-js-3/articles/chart-series The index.html is as follows:

        <html>
            <head>
                <title>TODO supply a title</title>
                <meta charset="UTF-8">
                <meta name="viewport" content="width=device-width">
                <link rel="stylesheet" href="js/libs/extjs/resources/css/ext-all.css"/>
                <script src="js/libs/ext-core/ext-core.js"></script>
                <script src="js/libs/extjs/adapter/ext/ext-base-debug.js"></script>
                <script src="js/libs/extjs/ext-all-debug.js"></script>
                <script src="app.js"></script>
            </head>
            <body>
            </body>
        </html>

    More info on Ext JS: http://docs.sencha.com/extjs/4.1.3/ By the way, quite a few other articles are out there on Ext JS and NetBeans IDE, such as these, which I will be learning from during the coming days:

    http://netbeans.dzone.com/extjs-rest-netbeans
    http://netbeans.dzone.com/articles/create-your-first-extjs-4
    http://netbeans.dzone.com/articles/mixing-extjs-json-p-and-java

    Read the article

  • What to choose for a multilingual site with support for Markdown and commenting

    - by Kent
    I want to publish articles at a multilingual site. I want to be able to write an article in two languages and have them available on separate URLs: thesite.foo/english-breakfast and thesite.com/engelsk-frukost. If the user's web browser is set to English I'd like to show a small notice at the top of the Swedish version with a link to the English one. The link should have an appropriate rel attribute for a translation (search for hreflang at http://diveintohtml5.org/semantics.html). There should be a way to list all articles belonging to these sets: Swedish only, English only, Swedish versions + English only, English versions + Swedish only. I'd like to publish these as four RSS feeds. And I would like to have two versions of the main site, one in Swedish (showing Swedish versions + English only) and one in English (showing English versions). I shall be able to write the articles using Markdown, as that is the formatting language I find most convenient. There should be a way for users to comment, and some kind of way for me to protect myself against comment spam. I am leaning towards learning Drupal. I suspect I'll have to code this behavior myself as a module. To be frank I'd rather work with Java. Is Drupal the way to go? Or is there something more suitable for this project?

    Read the article

  • What is the correct way to restart udev in Ubuntu?

    - by zerkms
    I've changed the name of my eth1 interface to eth0. How do I now ask udev to re-read the config? service udev restart and udevadm control --reload-rules don't help. So is there any valid way other than rebooting? (Yes, a reboot fixes the issue.) UPD: yes, I know I should prepend the commands with sudo, but neither command posted above changes anything in the ifconfig -a output: I still see eth1, not eth0. UPD 2: I just changed the NAME property of the udev rule line. I don't know any reason for this to be ineffective. Neither of the commands I posted above reports any error; they just don't change the actual interface name in the ifconfig -a output. If I reboot, the interface name changes as expected. UPD 3: let me explain the whole case better ;-) For development purposes I am writing a script that clones virtual machines (VirtualBox-driven) and pre-sets them up in some way. So I perform a command to clone a VM and start it, and since the network interface MAC has changed, udev adds a second rule to the network persistent rules. Right after the machine is booted for the first time there are 2 rules: eth0, which does not exist (it refers to the MAC of the original VM image), and eth1, which exists, but all the configuration in all files refers to eth0, so it is not much good to me. So with sed I delete the line with eth0 (it is obsolete and useless in the cloned image) and replace eth1 with eth0. So currently I have a valid persistent rule, but there is still eth1 in /dev. The issue: I don't want to reboot the machine (it would take extra time, which is not a good thing at the building-VM stage) and just want to have my /dev rebuilt with some command, so I have a ready-to-use VM without any reboots.
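
    For reference, a hedged sketch of the usual reboot-free route, assuming the interface can be taken down briefly: reloading the rules only affects future events, so the rename itself is applied by hand:

        $ sudo udevadm control --reload-rules
        $ sudo ip link set eth1 down
        $ sudo ip link set eth1 name eth0
        $ sudo ip link set eth0 up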

    Read the article

  • OBIEE Version 11.1.1.7.140527 Now Released

    - by Lia Nowodworska - Oracle
    (in via Martin) The Oracle Business Intelligence Enterprise Edition (OBIEE) 11g 11.1.1.7.140527 Bundle Patch is now available to download via My Oracle Support | Patches & Updates. It is provided as a single bundle patch, Patch 18507268, and is comprised of the following:

    - Patch 16913445 - 1 of 8 Oracle BI Installer (BIINST)
    - Patch 18507640 - 2 of 8 Oracle BI Publisher (BIP)
    - Patch 18657616 - 3 of 8 EPM Components Installed from BI Installer 11.1.1.7.0 (BIFNDNEPM)
    - Patch 18507802 - 4 of 8 Oracle BI Server (BIS)
    - Patch 18507778 - 5 of 8 Oracle BI Presentation Services (BIPS)
    - Patch 17300045 - 6 of 8 Oracle Real-Time Decisions (RTD)
    - Patch 16997936 - 7 of 8 Oracle BI ADF Components (BIADFCOMPS)
    - Patch 18507823 - 8 of 8 Oracle BI Platform Client Installers and MapViewer

    NOTE: Patch 16569379 (Dynamic Monitoring Service patch) is also required to be downloaded. This patch set is available for all customers who are using Oracle Business Intelligence Enterprise Edition 11.1.1.7.0, 11.1.1.7.1, 11.1.1.7.131017, 11.1.1.7.140114, 11.1.1.7.140225 and 11.1.1.7.140415. NOTE: It is also available for Exalytics customers who have applied the Exalytics PS3 patch. For more information refer to: OBIEE 11g 11.1.1.7.140527 Bundle Patch is Available for OBIEE (Doc ID 1676798.1). The OBIEE Suite bundle patches are cumulative: the content of the previous 11.1.1.7.x bundle patches is included in this latest bundle patch. Be sure to review the Readme documentation for further important patch information; it is available via the My Oracle Support | Patches & Updates screen when downloading. Keep up to date with the latest OBIEE patches and patch set updates by visiting OBIEE 11g: Required and Recommended Patches and Patch Sets (Doc ID 1488475.1).

    Read the article

  • Calculating collision force with AfterCollision/NormalImpulse is unreliable when IgnoreCCD = false?

    - by Michael
    I'm using Farseer Physics Engine 3.3.1 in a very simple XNA 4 test game. (Note: I'm also tagging this Box2D, because Farseer is a direct port of Box2D and I will happily accept Box2D answers that solve this problem.) In this game, I'm creating two bodies. The first body is created using BodyFactory.CreateCircle and BodyType.Dynamic. This body can be moved around using the keyboard (which sets Body.LinearVelocity). The second body is created using BodyFactory.CreateRectangle and BodyType.Static. This body is static and never moves. Then I'm using this code to calculate the force of collision when the two bodies collide:

        staticBody.FixtureList[0].AfterCollision +=
            new AfterCollisionEventHandler(AfterCollision);

        protected void AfterCollision(Fixture fixtureA, Fixture fixtureB, Contact contact)
        {
            float maxImpulse = 0f;
            for (int i = 0; i < contact.Manifold.PointCount; i++)
                maxImpulse = Math.Max(maxImpulse, contact.Manifold.Points[i].NormalImpulse);
            // maxImpulse should contain the force of the collision
        }

    This code works great if both of these bodies are set to IgnoreCCD=true. I can calculate the force of collision between them 100% reliably. Perfect. But here's the problem: If I set the bodies to IgnoreCCD=false, that code becomes wildly unpredictable. AfterCollision is called reliably, but for some reason the NormalImpulse is 0 about 75% of the time, so only about one in four collisions is registered. Worse still, the NormalImpulse seems to be zero for completely random reasons. The dynamic body can collide with the static body 10 times in a row in virtually exactly the same way, and only 2 or 3 of the hits will register with a NormalImpulse greater than zero. Setting IgnoreCCD=true on both bodies instantly solves the problem, but then I lose continuous physics detection. Why is this happening and how can I fix it? Here's a link to a simple XNA 4 solution that demonstrates this problem in action: http://www.mediafire.com/?a1w242q9sna54j4

    Read the article

  • Don't Miss the Social Engagement Center -- See How Social Cloud Tools Can Work for You

    - by Oracle OpenWorld Blog Team
    Are you ready to get social at Oracle OpenWorld? Stop by the Oracle Social Engagement Center in the Moscone South Upper Lobby (near the South Meetup location) and see Oracle Cloud Social Services in action. Ask Oracle's social experts how they're using next-generation enterprise social tools to deliver extreme engagement. Watch in near real-time as Oracle reaches out to inform, inspire, and engage global communities. We're showing:

    - Collective Intellect for specific data sets on 2 large screens
    - Vitrue analytics and Vitrue publishing on 2 large screens
    - Relative Twitter activity across the hashtags #OOW, #OOW12, #openworld, #oracle, and the accounts @oracle and @openworld on 1 large screen

    Plus we have 5 computers where we're actively working with the Collective Intellect and Vitrue technologies, so you can see how they function. So come visit the Social Engagement Center to learn how Oracle is using and engaging with these tools. And don't forget the Social Plaza @ OpenWorld event on Tuesday from noon - 8:00 p.m. Join us for food, drink, the afternoon keynote, and some cool libations on a hot afternoon.

    Read the article

  • Beyond S&OP: Integrated Business Planning

    - by Paul Homchick
    In most corporations, planning is done at the department level, leaving disconnects and gaps across different departments. Finance sets revenue and profit goals with minimal validation from Manufacturing that the company has the resources, material, capacity, or demand to reach these goals. On the operations side, Manufacturing develops plans to balance demand and supply but seldom knows if the resulting "plan" will meet the budgets on which the company's revenue and profit goals are based. The Sales department agrees to quotas that meet Finance's revenue goals without a complete understanding of what Manufacturing can deliver. Integrated Business Planning (IBP) bridges these gaps in corporate planning systems. Integrated Business Planning integrates the financial planning provided by EPM systems with the operations planning provided by Sales and Operations Planning solutions. This means that revenue goals and budgets are validated against a bottom-up operating plan, and that the operating plan is reconciled against financial goals. When detailed changes are made to the operations plan, planners can immediately see the big-picture impact of the changes. IBP also addresses one of the CFO's big concerns: the reliability of the revenue forecast. Operating plans are updated daily or weekly from a precise forecast based on current market conditions. These updated plans are then made available so that financial analysts are working with data that best represents what is going to happen, not what they projected would happen based on last quarter's data. For a discussion in more depth, see my article: Improve Reliability of Financial Forecasts with Integrated Business Planning in Supply & Demand Chain Executive magazine.

    Read the article

  • OOP concept: is it possible to update the class of an instantiated object?

    - by Federico
    I am trying to write a simple program that should allow a user to save and display sets of heterogeneous, but somehow related, data. For clarity's sake, I will use a representative example of vehicles. The program flow is like this:

    - The program creates a Garage object, which is basically a class that can contain a list of Vehicle objects.
    - Then the user creates Vehicle objects; these Vehicles each have a property, let's say License Plate Nr.
    - Once created, the Vehicle object gets added to a list within the Garage object.
    - Later on, the user can specify that a given Vehicle object is in fact a Car object or a Truck object (thus giving access to some specific attributes such as Number of seats for the Car, or Cargo weight for the Truck).

    At first sight, this might look like an OOP textbook question involving a base class and inheritance, but the problem is more subtle because at object creation time (and until the user decides to give more info), the computer doesn't know the exact Vehicle type. Hence my question: how would you proceed to implement this program flow? Is OOP the way to go? Just to give an initial answer, here is what I've come up with until now. There is only one Vehicle class and the various properties/values are handled by the main program (not the class) through a dictionary. However, I'm pretty sure that there must be a more elegant solution (I'm developing using VB.net):

        Public Class Garage
            Public GarageAdress As String
            Private _ListGarageVehicles As New List(Of Vehicles)
            Public Sub AddVehicle(Vehicle As Vehicles)
                _ListGarageVehicles.Add(Vehicle)
            End Sub
        End Class

        Public Class Vehicles
            Public LicensePlateNumber As String
            Public Enum VehicleTypes
                Generic = 0
                Car = 1
                Truck = 2
            End Enum
            Public VehicleType As VehicleTypes
            Public DictVehicleProperties As New Dictionary(Of String, String)
        End Class

    NOTE that in the example above the public/private modifiers do not necessarily reflect the original code.
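
    One common alternative to the dictionary approach is composition: keep a single Vehicle class for the whole object lifetime and attach a swappable, type-specific details object once the type becomes known (essentially the State pattern). A rough Python sketch of the idea, with all names illustrative:

        class Vehicle:
            """Stays the same object for its whole life; only its details change."""
            def __init__(self, plate):
                self.plate = plate
                self.details = None              # type unknown at creation time

            def specialize(self, details):
                self.details = details           # later: the user tells us what it is

        class CarDetails:
            def __init__(self, seats):
                self.seats = seats

        class TruckDetails:
            def __init__(self, cargo_weight):
                self.cargo_weight = cargo_weight

        garage = []                              # stand-in for the Garage list
        v = Vehicle("ABC-123")                   # created before the type is known
        garage.append(v)
        v.specialize(CarDetails(seats=5))        # no re-classing, no dictionary
        print(v.details.seats)                   # -> 5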

    Read the article

  • Oracle Endeca Information Discovery 3.1 is Now Available

    - by p.anda
    Oracle Endeca Information Discovery (OEID) 3.1 is a major release that incorporates significant new self-service discovery capabilities for business users. These include agile data mashup, extended support for unstructured analytics, and an even tighter integration with Oracle BI. This release is available for download from the Oracle Delivery Cloud and the Oracle Technology Network. Some of the what's-new highlights:

    - Self-service data mashup: enables access to a wider variety of personal and trusted enterprise data sources. Blend multiple data sets in a single app.
    - Agile discovery dashboards: allows users to easily create, configure, and securely share discovery dashboards with intelligent defaults, intuitive wizards and drag-and-drop configuration.
    - Deeper unstructured analysis: enables users to enrich text using term extraction and whitelist tagging while the data is live.
    - Enhanced integration with OBI: provides easier wizards for data selection and enables OBI Server as a self-service data source.
    - Enterprise-class data discovery: offers faster performance, a trusted data connection library, improved auditing and increased data connectivity for Hadoop, web content and Oracle Data Integrator.

    Find out more: visit the OEID Overview page to download the What's New and related Data Sheet PDF documents. Have questions or want to share details about Oracle Endeca Information Discovery? The MOS Communities are a great first stop; you can drop by the MOS OEID Community.

    Read the article

  • Sort algorithms that work on large amounts of data

    - by Giorgio
    I am looking for sorting algorithms that can work on a large amount of data, i.e. that can work even when the whole data set cannot be held in main memory at once. The only candidate that I have found up to now is merge sort: you can implement the algorithm in such a way that it scans the data set at each merge without holding all the data in main memory at once. The variation of merge sort I have in mind is described in this article in the section "Use with tape drives". I think this is a good solution (with complexity O(n log n)), but I am curious to know if there are other (possibly faster) sorting algorithms that can work on large data sets that do not fit in main memory. EDIT: here are some more details, as required by the answers:

    - The data needs to be sorted periodically, e.g. once a month. I do not need to insert a few records and have the data sorted incrementally.
    - My example text file is about 1 GB of UTF-8 text, but I wanted to solve the problem in general, even if the file were, say, 20 GB.
    - It is not in a database and, due to other constraints, it cannot be.
    - The data is dumped by others as a text file; I have my own code to read this text file.
    - The format of the data is a text file: newline characters are record separators.

    One possible improvement I had in mind was to split the file into files that are small enough to be sorted in memory, and finally merge all these files using the algorithm I have described above.
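
    A minimal sketch of that split-then-merge plan in Python; heapq.merge performs the k-way merge lazily, so only one line per chunk is held in memory during the merge phase (the chunk size is an assumption to tune against available RAM):

        import heapq
        import tempfile
        from itertools import islice

        def external_sort(src, dst, lines_per_chunk=1000000):
            """Sort a newline-delimited text file that may not fit in memory."""
            chunks = []
            with open(src, encoding="utf-8") as f:
                while True:
                    lines = list(islice(f, lines_per_chunk))   # one memory-sized chunk
                    if not lines:
                        break
                    lines.sort()                               # in-memory sort of the chunk
                    tmp = tempfile.TemporaryFile("w+", encoding="utf-8")
                    tmp.writelines(lines)
                    tmp.seek(0)
                    chunks.append(tmp)
            with open(dst, "w", encoding="utf-8") as out:
                out.writelines(heapq.merge(*chunks))           # lazy k-way merge
            for tmp in chunks:
                tmp.close()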

    Read the article

  • How can I calculate a vertex normal for a hard edge?

    - by K.G.
    Here is a picture of a lovely polygon: Circled is a vertex, and numbered are its adjacent faces. I have calculated the normals of those faces as such (not yet normalized, 0-indexed):

        Vertex 1 normal 0:  0.000000 0.000000 -0.250000
        Vertex 1 normal 1:  0.000000 0.000000 -0.250000
        Vertex 1 normal 2: -0.250000 0.000000  0.000000
        Vertex 1 normal 3: -0.250000 0.000000  0.000000
        Vertex 1 normal 4:  0.250000 0.000000  0.000000

    What I'm wondering is, how can I determine, taken as given that I want this vertex to represent a hard edge, whether its normal should be the normal of 1/2 or 3/4? My plan after I glanced at the sketch I used to put this together was "Ha! I'll just use whichever two faces have the same normal!" and now I see that there are two sets of two faces for which this is true. Is there a rule I can apply, based on the face winding, angle of the adjacent edges, moon phase, coin flip, to consistently choose a normal direction for this box? For the record, all of the other polygons I plan to use will have their normals dictated in Maya, but after encountering this problem, it made me really curious.
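
    For what it's worth, the usual consistent rule is a crease-angle threshold rather than a choice between the tied pairs: duplicate the vertex and average only those face normals that lie within a chosen angle of each other, so each smoothing group gets its own vertex normal. A hedged Python sketch (the 45-degree threshold is an arbitrary choice):

        import math

        def split_normals(face_normals, crease_deg=45.0):
            """Group adjacent face normals by crease angle; return one averaged
            (unnormalised) normal per group - the vertex is duplicated per group."""
            cos_crease = math.cos(math.radians(crease_deg))

            def unit(v):
                n = math.sqrt(sum(c * c for c in v)) or 1.0
                return tuple(c / n for c in v)

            groups = []
            for n in map(unit, face_normals):
                for g in groups:
                    if sum(a * b for a, b in zip(n, g[0])) >= cos_crease:
                        g.append(n)        # within the crease: smooth-shade together
                        break
                else:
                    groups.append([n])     # past the crease: new hard-edge group
            return [tuple(sum(c) for c in zip(*g)) for g in groups]

        # The question's five normals collapse to three groups (normals 0/1, 2/3,
        # and 4), i.e. duplicate the vertex, one copy per group.
        normals = [(0, 0, -0.25), (0, 0, -0.25),
                   (-0.25, 0, 0), (-0.25, 0, 0), (0.25, 0, 0)]
        print(split_normals(normals))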

    Read the article

  • Public Speaking in software development.

    - by mummey
    Greetings, my fellow cubicle dwellers. I've found my role gradually change from "feature-maintainer" to "feature-developer". While much of the former consisted of fixing and/or updating an existing feature (and quietly grumbling about its implementation with complete naiveté), in this new role I find I:

    - Have to communicate with immediate management to define the development requirements to turn around the new feature
    - Have to communicate with design to determine the user requirements of the new feature
    - Have to communicate with QA to determine test sets for the new feature, as well as its current state during development
    - Have to communicate with producers/project-managers to define the remaining turnaround time, as well as updates in development requirements
    - and finally, have to occasionally communicate with upper management to defend the new feature and demonstrate its minimized risk to the upcoming release

    The last item is key here, and it took me a couple of occasions to completely realize it. In all, though, it becomes very apparent that communication skills ARE important, even or especially for developers who feel they 'own' the feature they're working on. All of this said, I recognize its importance and would like to improve my skills in this area further. I enjoy one-on-one communication but find I tend to stutter a bit when speaking to any group larger than a few people I know well. Where can I find good resources to improve my own communication skills?

    Read the article

  • Mutating Programming Language?

    - by MattiasK
    For fun I was thinking about how one could build a programming language that differs from OOP, and came up with this concept. I don't have a strong foundation in computer science so it might be commonplace without me knowing it (more likely it's just a stupid idea :) I apologize in advance for this somewhat rambling question :) Anyways, here goes: in normal OOP, methods and classes are variant only upon parameters, meaning that if two different classes/methods call the same method they get the same output. My, perhaps crazy, idea is that the calling method and class could be an "invisible" part of its signature, and the response could vary depending on who calls the method. Say that we have a Window object with a Break() method; now anyone (who has access) could call this method on Window with the same result. Now say that we have two different objects, Hammer and SledgeHammer. If Break needs to produce different results based on these, we'd pass them as parameters: Break(IBluntObject bluntObject). With a mutating programming language (MPL), the operating objects on the method would be visible to the Break method without being explicitly defined, and it could adapt itself based on them. So if SledgeHammer calls Window.Break() it would generate vastly different results than if Hammer did so. If OOP classes are black boxes, then MPL classes are black boxes that know who's (trying) to push their buttons and can adapt accordingly. You could also have different permission sets on methods depending on who's calling them, rather than having absolute permissions like public and private. Does this have any advantage over OOP? Or perhaps I should say, would it add anything to it, since you should be able to simply add this aspect to methods (just give access to a CallingMethod and CallingClass variable in context)? I'm not sure; it might be too hard to wrap one's head around, but it would be kind of interesting to have classes that adapted themselves to who uses them. Still, it's an interesting concept; what do you think, is it viable?
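
    As an aside, the concept can be prototyped today in languages that expose the call stack. A toy Python sketch: sys._getframe is a CPython implementation detail, the class names follow the question, and the method is spelled break_ only because break is a reserved word:

        import sys

        class Window:
            def break_(self):
                # Peek at the caller's frame: the "invisible" part of the signature.
                caller = sys._getframe(1).f_locals.get("self")
                if isinstance(caller, SledgeHammer):
                    return "window shatters"
                if isinstance(caller, Hammer):
                    return "window cracks"
                return "window shrugs it off"

        class Hammer:
            def swing(self, window):
                return window.break_()

        class SledgeHammer:
            def swing(self, window):
                return window.break_()

        w = Window()
        print(Hammer().swing(w))         # window cracks
        print(SledgeHammer().swing(w))   # window shatters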

    Read the article

  • Comparing Dates in Oracle Business Rule Decision Tables

    - by James Taylor
    I have been working with decision tables for some time but have never had a scenario where I needed to compare dates. The use case was to check if a person's membership had expired. I didn't think much of it till I started to develop it. The first trap I fell into was trying to create ranges and bucket sets. The other trap I fell into was not converting the date field to a complete date. This may seem obvious to most people, but my Google searches came up with nothing, so I thought I would create a quick post. I assume everyone knows how to create a decision table, so I'm not going to go through those steps. The prerequisite for this post is a decision table with a payload that has a date field. This field must hold the date in the following format: YYYY-MM-DDThh:mm:ss.

    1. Create a new condition in your decision table.
    2. Right-click on the condition to edit it and select the expression builder.
    3. In the expression builder, select the Functions tab.
    4. Expand the CurrentDate node, select date, and click the Insert Into Expression button.
    5. In the Expression Builder you need to create an expression that will return true or false, so add the operation <= after CurrentDate.date.
    6. In my scenario my date field is memberExpire: navigate to your date field, expand it, and select toGregorianCalendar().
    7. Your expression will look something like the one below; click OK to get back to the decision table.

    Now it's just a matter of checking whether the value is true or false. Simple when you know how :-)
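
    The screenshots from the original post aren't reproduced here, but given the steps above, the finished condition presumably reads roughly as follows (the fact name holding memberExpire is a placeholder):

        CurrentDate.date <= MemberFact.memberExpire.toGregorianCalendar()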

    Read the article

  • How to make an HLSL effect just for lighting, without texture mapping?

    - by naprox
    I'm new to XNA. I created an effect and just want to use lighting, but with the default effect that XNA creates you have to do texture mapping or the model appears red, because of these lines of code in the effect file:

        float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
        {
            float4 output = float4(1, 0, 0, 1);
            return output;
        }

    If I want to see my model (looking like it does when I use BasicEffect), I must do texture mapping with UV coordinates, but my model does not have UV coordinates assigned, or its UV coordinates were not exported, so if I do texture mapping I get an error. (I do texture mapping with this line of code in the vertex shader function, plus other necessary code: output.UV = input.UV;) I have many of these models and want to work with them (my models are in .FBX format). When I use BasicEffect I have no problem and the model appears correctly. How can I use just lighting in my custom effects, without texture mapping (because I have no UV coordinates in my models), so my model looks like it does when I use BasicEffect? If you need my complete code, here it is: http://www.mediafire.com/?4jexhd4ulm2icm2 Here is my model using BasicEffect: http://i.imgur.com/ygP2h.jpg?1 And this is my code for drawing with or without BasicEffect, inside my Draw() method:

        Matrix baseWorld = Matrix.CreateScale(Scale) *
                           Matrix.CreateFromYawPitchRoll(Rotation.Y, Rotation.X, Rotation.Z) *
                           Matrix.CreateTranslation(Position);

        foreach (ModelMesh mesh in Model.Meshes)
        {
            Matrix localWorld = ModelTransforms[mesh.ParentBone.Index] * baseWorld;
            foreach (ModelMeshPart part in mesh.MeshParts)
            {
                Effect effect = part.Effect;
                if (effect is BasicEffect)
                {
                    ((BasicEffect)effect).World = localWorld;
                    ((BasicEffect)effect).View = View;
                    ((BasicEffect)effect).Projection = Projection;
                    ((BasicEffect)effect).EnableDefaultLighting();
                }
                else
                {
                    setEffectParameter(effect, "World", localWorld);
                    setEffectParameter(effect, "View", View);
                    setEffectParameter(effect, "Projection", Projection);
                    setEffectParameter(effect, "CameraPosition", CameraPosition);
                }
            }
            mesh.Draw();
        }

    setEffectParameter is another method that sets the effect parameters when I use my custom effect.

    Read the article

  • How to remove seams from a tile map in 3D?

    - by Grimshaw
    I am using my custom OpenGL engine to render a tilemap made with Tiled, using a well-spread tileset from the web. There is nothing fancy going on. I load the TMX file from Tiled and generate vertex arrays and index arrays to render the tilemap. I am rendering this tilemap as a wall in my 3D world, meaning that I move around with a fly camera in my 3D world and at Z=0 there is a plane showing me my tiles. Everything is working correctly but I get ugly seams between the tiles. I've tried orthographic and perspective cameras, and with either I found particular sets of parameters for the projection and view matrices where the artifacts did not show, but otherwise they are there 99% of the time in multiple patterns, depending on the zoom and camera parameters like field of view. Here's a screenshot of the artifact being shown: http://i.imgur.com/HNV1g4M.png Here's the tileset I am using (which Tiled also uses and renders correctly): http://i.imgur.com/SjjHK4q.png My tileset has no mipmaps and is set to GL_NEAREST and GL_CLAMP_TO_EDGE values. I've looked around many articles on the internet and nothing helped. I tried UV correction so the UVs fall at half a texel, rather than at the edge of the texel, to prevent interpolation with the neighbouring value (which is transparency). I tried debugging with my geometry and I verified that with no texture and a random color in each tile, I don't seem to see any seams. All vertices have integer coordinates, i.e. the first tile is a quad from (0,0) to (1,1) and so on. I tried adding a little offset both to the UVs and to the vertices to see if the gaps cease to exist. Disabled multisampling too. Nothing fixed it so far. Thanks.
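
    For completeness, a sketch of the half-texel UV inset mentioned above (tile and atlas sizes are placeholders, not taken from the question); in practice most atlases also need duplicated gutter pixels around each tile, because even with the inset, GL_NEAREST can still snap to a neighbouring texel at some zoom levels:

        def tile_uvs(tx, ty, tile_px=64, atlas_px=640):
            """UV rectangle for tile (tx, ty), inset by half a texel per side."""
            half = 0.5 / atlas_px
            u0 = tx * tile_px / atlas_px + half
            v0 = ty * tile_px / atlas_px + half
            u1 = (tx + 1) * tile_px / atlas_px - half
            v1 = (ty + 1) * tile_px / atlas_px - half
            return (u0, v0), (u1, v1)

        # e.g. the tile at column 2, row 1 of a 640x640 atlas of 64px tiles:
        print(tile_uvs(2, 1))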

    Read the article

  • Design pattern for an automated mechanical test bench

    - by JJS
    Background

    I have a test fixture with a number of communication/data acquisition devices on it that is used as an end-of-line test for a product. Because of all the various sensors used in the bench and the need to run the test procedure in near real-time, I'm having a hard time structuring the program to be more friendly to modify later on. For example, a National Instruments USB data acquisition device is used to control an analog output (load) and monitor an analog input (current), a digital scale with a serial data interface measures position, an air pressure gauge uses a different serial data interface, and the product is interfaced through a proprietary DLL that handles its own serial communication.

    The hard part

    The "real-time" aspect of the program is my biggest tripping point. For example, I need to time how long the product needs to go from position 0 to position 10,000, to the tenth of a second. While it's traveling, I need to ramp up an output of the NI DAQ when it reaches position 6,000 and ramp it down when it reaches position 8,000. This sort of control looks easy from browsing NI's LabVIEW docs, but I'm stuck with C# for now. All external communication is done by polling, which makes for lots of annoying loops. I've slapped together a loose producer-consumer model where the producer thread loops through reading the sensors and sets the outputs. The consumer thread executes functions containing timed loops that poll the producer for current data and execute movement commands as required. The UI thread polls both threads for updating some gauges indicating current test progress.

    Unsure where to start

    Is there a more appropriate pattern for this type of application? Are there any good resources for writing control loops in software (non-LabVIEW) that interface with external sensors and whatnot?
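
    As a shape for the program rather than a drop-in solution, here is a stripped-down Python sketch of that producer/consumer split; read_position, read_current, and set_load are hypothetical stand-ins for the DAQ and serial calls:

        import threading
        import time

        def read_position():            # stand-in for the serial scale read
            return time.monotonic() % 12000
        def read_current():             # stand-in for the NI DAQ analog input
            return 0.0
        def set_load(on):               # stand-in for the NI DAQ analog output
            pass

        class Bench:
            def __init__(self):
                self.lock = threading.Lock()
                self.snapshot = {"position": 0.0, "current": 0.0}
                self.stop = threading.Event()

            def producer(self):
                """Poll all sensors in one loop and publish a coherent snapshot."""
                while not self.stop.is_set():
                    reading = {"position": read_position(), "current": read_current()}
                    with self.lock:
                        self.snapshot.update(reading)
                    time.sleep(0.005)                  # ~200 Hz polling

            def consumer(self):
                """The timed test step: consume snapshots, drive outputs."""
                start = time.monotonic()
                while not self.stop.is_set():
                    with self.lock:
                        pos = self.snapshot["position"]
                    set_load(6000 <= pos <= 8000)      # ramp window from the spec
                    if pos >= 10000:
                        print(f"travel took {time.monotonic() - start:.1f} s")
                        self.stop.set()

        bench = Bench()
        threads = [threading.Thread(target=bench.producer),
                   threading.Thread(target=bench.consumer)]
        for t in threads: t.start()
        for t in threads: t.join()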

    Read the article

  • When someone deletes a shared data source in SSRS

    - by Rob Farley
    SQL Server Reporting Services plays nicely. You can have things in the catalogue that get shared. You can have Reports that have Links, Datasets that can be used across different reports, and Data Sources that can be used in a variety of ways too. So if you find that someone has deleted a shared data source, you potentially have a bit of a horror story going on. And this works for this month's T-SQL Tuesday theme, hosted by Nick Haslam, who wants to hear about horror stories. I don't write about LobsterPot client horror stories, so I'm writing about a situation that a fellow MVP friend asked me about recently instead. The best thing to do is to grab a recent backup of the ReportServer database, restore it somewhere, and figure out what's changed. But of course, this isn't always possible. And it's much nicer to help someone with this kind of thing than to be trying to fix it yourself when you've just deleted the wrong data source. Unfortunately, Reporting Services lets you delete a data source without trying to scream that it is shared across over 400 reports in over 100 folders, as was the case for my friend's colleague. So, suddenly there's a big problem – lots of reports are failing, and the time to turn it around is small. You probably know which data source has been deleted, but getting the shared data source back isn't the hard part (that's just a connection string really). The nasty bit is all the re-mapping, to get those 400 reports working again.

    I know from exploring this kind of stuff in the past that the ReportServer database (using its default name) has a table called dbo.Catalog to represent the catalogue, and that Reports are stored here. However, the information about what data sources these deployed reports are configured to use is stored in a different table, dbo.DataSource. You could be forgiven for thinking that shared data sources would live in this table, but they don't – they're catalogue items just like the reports. Let's have a look at the structure of these two tables (although if you're reading this because you have a disaster, feel free to skim past). Frustratingly, there doesn't seem to be a Books Online page for this information, sorry about that. I'm also not going to look at all the columns, just ones that I find interesting enough to mention, and that are related to the problem at hand. These fields are consistent all the way through to SQL Server 2012 – there doesn't seem to have been any changes here for quite a while.

    dbo.Catalog

    The Primary Key is ItemID. It's a uniqueidentifier. I'm not going to comment any more on that. A minor nice point about using GUIDs in unfamiliar databases is that you can more easily figure out what's what. But foreign keys are for that too… Path, Name and ParentID tell you where in the folder structure the item lives. Path isn't actually required – you could've done recursive queries to get there. But as that would be quite painful, I'm more than happy for the Path column to be there. Path contains the Name as well, incidentally. Type tells you what kind of item it is. Some examples are 1 for a folder and 2 a report. 4 is linked reports, 5 is a data source, 6 is a report model. I forget the others for now (but feel free to put a comment giving the full list if you know it). Content is an image field, remembering that image doesn't necessarily store images – these days we'd rather use varbinary(max), but even in SQL Server 2012, this field is still image. It stores the actual item definition in binary form, whether it's actually an image, a report, whatever. LinkSourceID is used for Linked Reports, and has a self-referencing foreign key (allowing NULL, of course) back to ItemID. Parameter is an ntext field containing XML for the parameters of the report. Not sure why this couldn't be a separate table, but I guess that's just the way it goes. This field gets changed when the default parameters get changed in Report Manager. There is nothing in dbo.Catalog that describes the actual data sources that the report uses. The default data sources would be part of the Content field, as they are defined in the RDL, but when you deploy reports, you typically choose to NOT replace the data sources. Anyway, they're not in this table. Maybe it was already considered a bit wide to throw in another ntext field, I'm not sure. They're in dbo.DataSource instead.

    dbo.DataSource

    The Primary Key is DSID. Yes, it's a uniqueidentifier... ItemID is a foreign key reference back to dbo.Catalog. Fields such as ConnectionString, Prompt, UserName and Password do what they say on the tin, storing information about how to connect to the particular source in question. Link is a uniqueidentifier, which refers back to dbo.Catalog. This is used when a data source within a report refers back to a shared data source, rather than embedding the connection information itself. You'd think this should be enforced by foreign key, but it's not. It does allow NULLs though. Flags is an int, and I'll come back to this.

    When a Data Source gets deleted out of dbo.Catalog, you might assume that it would be disallowed if there are references to it from dbo.DataSource. Well, you'd be wrong. And not because of the lack of a foreign key either. Deleting anything from the catalogue is done by calling a stored procedure called dbo.DeleteObject. You can look at the definition in there – it feels very much like the kind of Delete stored procedure that many people write, the kind of thing that means they don't need to worry about allowing cascading deletes with foreign keys – because the stored procedure does the lot. Except that it doesn't quite do that. If it deleted everything on a cascading delete, we'd've lost all the data sources as configured in dbo.DataSource, and that would be bad. This is fine if the ItemID from dbo.DataSource hooks in – if the report is being deleted. But if a shared data source is being deleted, you don't want to lose the existence of the data source from the report. So it sets it to NULL, and it marks it as invalid. We see this code in that stored procedure:

        UPDATE [DataSource]
           SET [Flags] = [Flags] & 0x7FFFFFFD, -- broken link
               [Link] = NULL
          FROM [Catalog] AS C
               INNER JOIN [DataSource] AS DS ON C.[ItemID] = DS.[Link]
         WHERE (C.Path = @Path OR C.Path LIKE @Prefix ESCAPE '*')

    Unfortunately there's no semi-colon on the end (but I'd rather they fix the ntext and image types first), and don't get me started about using the table name in the UPDATE clause (it should use the alias DS). But there is a nice comment about what's going on with the Flags field. What I'd LIKE it to do would be to set the connection information to a report-embedded copy of the connection information that's in the shared data source, the one that's about to be deleted. I understand that this would cause someone to lose the benefit of having the data sources configured in a central point, but I'd say that's probably still slightly better than LOSING THE INFORMATION COMPLETELY. Sorry, rant over. I should log a Connect item – I'll put that on my todo list.

    So it sets the Link field to NULL, and marks the Flags to tell you they're broken. So this is your clue to fixing it. A bitwise AND with 0x7FFFFFFD is basically stripping out the '2' bit from a number. So numbers like 2, 3, 6, 7, 10, 11, etc, whose binary representation ends in either 11 or 10, get turned into 0, 1, 4, 5, 8, 9, etc. We can test for it using a WHERE clause that matches the SET clause we've just used. I'd also recommend checking for Link being NULL and also having no ConnectionString. And join back to dbo.Catalog to get the path (including the name) of the broken reports – in case you get a surprise from a different data source having been broken in the past.

        SELECT c.Path, ds.Name
          FROM dbo.[DataSource] AS ds
               JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID
         WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD
           AND ds.[Link] IS NULL
           AND ds.[ConnectionString] IS NULL;

    When I just ran this on my own machine, having deleted a data source to check my code, I noticed a Report Model in the list as well – so if you had thought it was just going to be reports that were broken, you'd be forgetting something. So to fix those reports, get your new data source created in the catalogue, and then find its ItemID by querying Catalog, using Path and Name to find it. And then use this value to fix them up. To fix the Flags field, just add 2. I prefer to use bitwise OR, which should do the same. Use the OUTPUT clause to get a copy of the DSIDs of the ones you're changing, just in case you need to revert something later after testing (doing it all in a transaction won't help, because you'll just lock out the table, stopping you from testing anything).

        UPDATE ds
           SET [Flags] = [Flags] | 2,
               [Link] = '3AE31CBA-BDB4-4FD1-94F4-580B7FAB939D' /* insert your own GUID */
        OUTPUT deleted.Name, deleted.DSID, deleted.ItemID, deleted.Flags
          FROM dbo.[DataSource] AS ds
               JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID
         WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD
           AND ds.[Link] IS NULL
           AND ds.[ConnectionString] IS NULL;

    But please be careful. Your mileage may vary. And there's no reason why 400-odd broken reports need to be quite the nightmare they could be. Really, it should be less than five minutes. @rob_farley

    Read the article

  • Dependency Analyser

    - by tsutha
    I have been watching the 3 videos put together by the SSIS team on the Dependency Analyser. I am happy Microsoft have made an effort to do something about it; I have been asking for this feature since the SQL 2005 TAP. Still, there is a long way to go before they catch up with the competition. It looks like it currently supports SSIS and SQL dependencies. The release notes state it is still in development and supports only a limited set of tasks. You still would not know the impact of dropping a column on cubes, reports, etc. I hope that changes by the time the RTM comes out. I am struggling to understand why it is impossible to do it across the solution. Ideally, if you have a BI solution which holds DB, SSIS, SSAS, SSRS & PPS projects, I would like to right-click and execute a dependency analysis stating what impact it would have if I dropped or renamed a specific column. Has anyone else looked at it, and what are your thoughts? Thanks, Sutha

    Read the article

  • SFTP permission denied on files owned by www-data

    - by Charles Roper
    I have a pretty standard server set up running Apache and PHP. An app I am running creates files, and these are owned by the Apache user www-data. Files that I upload via SFTP are owned by my own user charlesr. All files are part of the www-data group. My problem is that I cannot modify or overwrite any of the files via SFTP which are owned by www-data, even though charlesr is part of the www-data group. I can modify the files no problem via an SSH session. So I'm not sure what to do. How do I give my SFTP session permission to modify www-data owned files? For a bit of background, these are the notes I wrote for myself when setting up the server:

        Now set up permissions on `/var/www` where your files are served from by default:

        $ sudo adduser $USER www-data
        $ sudo chgrp -R www-data /var/www
        $ sudo chmod -R g+rw /var/www
        $ sudo chmod -R g+s /var/www

        Now log out and log in again to make the changes take hold.

    The previous set of commands does the following:

    1. adds the current user ($USER) to the `www-data` group;
    2. changes `/var/www` to belong to the `www-data` group;
    3. adds read/write permissions to the group that `/var/www` belongs to;
    4. sets the SGID bit on `/var/www`; this final point bears some explaining.

    And then I go on to explain to myself what setting the SGID bit means (i.e. all files created in /var/www become part of the www-data group automatically). Btw, nothing feels sweeter than going back and reading your own detailed notes on the what, how and why of your own server set up when trying to troubleshoot like this - I recommend it highly to all beginners like myself :-)
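
    A hedged aside for anyone hitting the same wall: the usual culprit is the umask rather than the group setup. The setgid bit keeps new files in the www-data group, but files created by Apache (or uploaded over SFTP) typically get mode 644, so the group never receives write permission, and the one-time chmod -R g+rw only covers files that existed at the time. A common sketch of a fix, assuming a reasonably recent OpenSSH with the internal-sftp subsystem, is to force a group-writable umask for SFTP sessions in /etc/ssh/sshd_config, and to repair the existing files once:

        # /etc/ssh/sshd_config: create SFTP-uploaded files group-writable
        Subsystem sftp internal-sftp -u 0002

        $ sudo chmod -R g+w /var/www   # grant group write on the existing files

    Files the app creates under Apache need the equivalent treatment on Apache's side (its own umask, or a periodic chmod).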

    Read the article
