Search Results

Search found 15903 results on 637 pages for 'mapping model'.


  • On Reflector Pricing

    - by Nick Harrison
    I have heard a lot of outrage over Red Gate's decision to charge for Reflector. In the interest of full disclosure, I am a fan of Red Gate. I have worked with them on several usability tests. They also sponsor Simple Talk, where I publish articles. They are a good company. I am also a BIG fan of Reflector. I have used it since Lutz originally released it. I have written my own add-ins. I have written code to host Reflector and use its object model in my own code. Reflector is a beautiful tool. The care that Lutz took to incorporate extensibility is amazing. I have never had difficulty convincing my fellow developers that it is a wonderful tool. Almost always, once anyone sees it in action, it becomes their favorite tool. This widespread adoption and usability has made it an icon and pivotal pillar of the .NET community. Even folks with the attitude that if it did not come out of Redmond then it must not be any good still love it. It is ironic to hear everyone clamoring for it to be released as open source. Reflector was never open source; it was free, but you were never able to peruse the source code and contribute your own changes. You could not even use Reflector to view its source code. From the very beginning, it was never anyone's intention for just anyone to examine the source code and make their own contributions, aside from the add-in model. Lutz chose to hand over the reins to Red Gate because he believed that they would be able to build on his original vision and keep the product viable and effective. He did not choose to make it open source, hoping that the community would be up to the challenge. The simplicity and elegance may well have been lost with the "design by committee" nature of open source. Despite being a wonderful and beloved tool, Reflector cannot be an easy tool to maintain. Maybe because it is so wonderful and beloved, it is even more difficult to maintain. At any rate, we have high expectations. Reflector must continue to reasonably disassemble every language construct that the framework and core languages dream up. We want it to be fast, and we also want it to continue to be simple to use. No small order. Red Gate tried to keep the core product free. Sadly, there was not enough interest in the Pro version to subsidize the rest of the expenses. $35 is a reasonable cost, more than reasonable. I have read the blog posts and forum posts complaining about the time associated with getting the expense approved. I have heard people complain about the cost being unreasonable if you are a developer from certain countries. Let's do the math. How much of a productivity boost is Reflector? How many hours do you think it saves you in a typical project? The next question is a little easier if you are a contractor or a consultant, but what is your hourly rate? If you are not a contractor, you can probably figure out an hourly rate. How long does it take to get a return on your investment? The value-added proposition is not a difficult one to make. I have read people clamoring that Red Gate sucks and is evil. They complain about broken promises and conflicts of interest. Relax! Red Gate is not evil. The world is not coming to an end. The sun will come up tomorrow. I am sure that Red Gate will come up with options for volume licensing or site licensing for companies that want a licensed copy for their entire team. Don't panic; I am sure that many great improvements are on the horizon. Switching the UI to WPF and including a tabbed interface opens up lots of possibilities.

    Read the article

  • Video crashes with 10.10

    - by John Mahon
    I have installed both the 64-bit and 32-bit versions of 10.10 on my Compaq Presario PC. I first installed the 64-bit version of the OS. The video often crashed when switching user. It also went haywire occasionally when I visited some websites. I read that there may be some problems with the 64-bit OS, so I installed the 32-bit version on another disk. This version seemed even less well behaved. HP's model number for the computer is SR1838NX. The hardware is listed at http://bizsupport1.austin.hp.com/bizsupport/TechSupport/Document.jsp?objectID=c00628274&lang=en&cc=us&contentType=SupportFAQ&prodSeriesId=1841793&prodTypeId=12454&printver=true#A0 I think the important info is that the chipset is "ATI Radeon Xpress 200" and the processor is "Athlon 64 (S) 3700+ 2.2 GHz". Has anyone else had video problems with similar machines? Is there a workaround or an update? I have had previous versions of Ubuntu working on this machine, and other flavors of Linux as well. Thanks in advance. John

    Read the article

  • Inside the IBM Selectric [Video]

    - by Jason Fitzpatrick
    The IBM Selectric was one of the best-selling typewriters of the 1960s and 70s and featured a rather unique digital-binary-to-analog system that controlled a typeball instead of a row of type bars. Check out this video to look inside. Courtesy of Bill Hammack of Engineer Guy Video, we're treated to a peek inside the popular typewriter model and an up-close look at how the unique typeball rotates and tilts to precisely deliver each letter. IBM Selectric Typewriter & Its Digital to Analogue Converter [Engineer Guy Video]

    Read the article

  • How do you handle increasingly long compile times when working with templates?

    - by Ghita
    I use Visual Studio 2012, and we have cases where we added template parameters to a class "just" in order to introduce a "seam point" so that in unit tests we can replace those parts with mock objects. How do you usually introduce seam points in C++: using interfaces, and/or mixing that (based on some criteria) with implicit interfaces by using template parameters? One reason to ask is that compiling a single C++ file (one that includes template files, which could in turn include other templates) sometimes results in an object file that takes on the order of 5-10 seconds to generate on a developer machine. The VS compiler is also not particularly fast at compiling templates, as far as I understand, and because of the template inclusion model (you practically include the definition of the template in every file that uses it indirectly, and possibly re-instantiate that template every time you modify something that has nothing to do with it) you can have problems with compile times when compiling incrementally. What are your ways of handling incremental (and not only incremental) compile times when working with templates (besides a better/faster compiler)?
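    One common mitigation, shown here as a minimal sketch (the Repository class and its files are hypothetical), is explicit instantiation with C++11 extern template, which stops clients from re-instantiating the same template in every translation unit:

        // repository.h -- the template definition, included by clients as usual
        #include <string>

        template <typename T>
        class Repository {
        public:
            void save(const T& item) { /* ... */ }
        };

        // Suppress implicit instantiation in every client translation unit:
        extern template class Repository<int>;
        extern template class Repository<std::string>;

        // repository.cpp -- instantiate exactly once, in one translation unit
        #include "repository.h"
        template class Repository<int>;
        template class Repository<std::string>;

    Clients still include repository.h, but the instantiation cost is paid once in repository.cpp instead of in every file that touches the template.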

    Read the article

  • Deleting ASP.NET application subdirectories causes application recycle!

    - by geekrutherford
    This may not be news to most people, but was definitely a shock to me! In the .NET 2.x framework a "feature" was implemented whereby an ASP.NET application is automatically recycled if any subdirectory is deleted. This was apparently implemented to prevent stale content from appearing on a site. The unfortunate side effect of this "feature" is that when using the "InProc" model for session management, all session data is lost if a subdirectory is deleted. For those who may be programmatically adding/deleting directories within their application as inherent functionality, this causes a rather large problem. The solution? Create the folder(s) which may be programmatically deleted outside of the root folder for the application. Alternatively, use a file-based structure instead of folders, since deleting files does not cause the same issue.
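    As a minimal sketch of the first workaround (the folder names and layout here are hypothetical), resolve the scratch directory one level above the web root, so deleting it never touches a subdirectory of the application:

        using System;
        using System.IO;
        using System.Web.Hosting;

        public static class ScratchStore
        {
            // Resolve a folder that lives beside, not inside, the web root,
            // so deleting it cannot trigger an app-domain recycle.
            public static string GetScratchFolder(string name)
            {
                string appRoot = HostingEnvironment.ApplicationPhysicalPath; // e.g. C:\Sites\MyApp\
                string parent = Directory.GetParent(appRoot.TrimEnd('\\')).FullName;
                string scratch = Path.Combine(parent, "MyApp_Scratch", name);
                Directory.CreateDirectory(scratch);
                return scratch;
            }

            public static void DeleteScratchFolder(string name)
            {
                Directory.Delete(GetScratchFolder(name), recursive: true);
            }
        }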

    Read the article

  • Why should I adopt MVC?

    - by Andrew
    I decided to get my feet wet and grabbed the Yii framework for PHP. I created my first application, then created a new controller, model, and view, connected to the database, got my record passed from the controller to the view, and printed the hello world. I am confused now. If I have to do the same thing for each page, this seems like a nightmare to me. In each controller I have to do a lot of the same operations: declare variables and pass them to views. I also need to create models for each page, and this is all confusing to me. To my mind, the main goal of development is to avoid duplication, but what I see here is lots and lots of duplicated code. Please advise and clarify. Maybe you could suggest good reading about MVC and coding patterns and best practices in MVC, because so far it takes much more time to create a small site using MVC than using my own programming scheme.
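    Much of that repetition is normally pushed up the inheritance chain. A minimal Yii 1.x-style sketch (the Post class and action are made up) of a base controller that centralizes the lookup-and-404 boilerplate every page would otherwise repeat:

        // Shared lookup logic lives once, in a base controller.
        class BaseController extends CController
        {
            protected function loadModel($class, $id)
            {
                $model = CActiveRecord::model($class)->findByPk($id);
                if ($model === null)
                    throw new CHttpException(404, 'Record not found.');
                return $model;
            }
        }

        class PostController extends BaseController
        {
            public function actionView($id)
            {
                // One line per action instead of repeated boilerplate.
                $this->render('view', array('model' => $this->loadModel('Post', $id)));
            }
        }

    Each concrete action then shrinks to the one line that is genuinely page-specific.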

    Read the article

  • Server-side random selection of players

    - by Ron
    Assuming I have a simple client-server game where the server picks random players on a very frequent basis, I was wondering what is the best way to select a random player, according to the following constraints: the solution must be high-performance and highly scalable; the random spread should be relatively even (meaning if I have 3 players and pick 99 times, they will each be picked 33 times, more or less); and it should only pick players who were active in the past X days (optional, but a big bonus). The actual DB or data model used to store players isn't an issue here, as we'll select the technology in accordance with our needs. However, high performance and scalability are (at the moment we have over 60,000 unique daily active players, and we plan on growing even more). Thanks!
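    As a baseline sketch of the uniform-pick part (storage and expiry are deliberately naive, and all names are invented): keep the IDs of recently active players in an array-backed list, so a pick is one random index lookup:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.ThreadLocalRandom;

        class PlayerPicker {
            // IDs of players active in the last X days; an ArrayList makes
            // picking by random index O(1) and uniform across players.
            private final List<Long> activePlayerIds = new ArrayList<>();

            void markActive(long playerId) {
                activePlayerIds.add(playerId); // real code would dedupe and expire entries
            }

            long pickRandomPlayer() {
                // Assumes at least one active player.
                int i = ThreadLocalRandom.current().nextInt(activePlayerIds.size());
                return activePlayerIds.get(i);
            }
        }

    At 60,000+ daily actives this easily fits in memory on one node; for horizontal scale the same idea works per shard, picking a shard first and then a player within it.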

    Read the article

  • How do I fool 12.10 into thinking it uses Unity 2D in order to get the resolution my screen can use?

    - by Konstapel Kask
    On a laptop (an older model) I have two screens (one built in, one external). I wish to use them both at the same time, but the settings app complains: Requested size (2944, 1080) exceeds 3D hardware limit (2048, 2048). You must either rearrange the displays so that they fit within a (2048, 2048) square or select the Ubuntu 2D session at login. The problem, though: Unity 2D has been discontinued. I have forced Ubuntu to render Unity in 2D mode (with the help of echo export UNITY_LOW_GFX_MODE=1 >> .xprofile, in accordance with this Ask Ubuntu post), but the error message still remains. How can I get the resolution that my computer supports? Or, how can I assure Ubuntu that it truly does use a 2D session?

    Read the article

  • ASP.NET MVC 2 Released!

    - by kaleidoscope
    ASP.NET MVC 2 Features: ASP.NET MVC 2 adds a bunch of new capabilities. Some of the new features include:
    - New strongly typed HTML helpers
    - Enhanced model validation support across both server and client
    - Auto-scaffold UI helpers with template customization
    - Support for splitting up large applications into 'Areas'
    - Asynchronous controller support that enables running long tasks in parallel
    - Support for rendering sub-sections of a page/site using Html.RenderAction
    - Lots of new helper functions, utilities, and API enhancements
    - Improved Visual Studio tooling support
    More details can be found at http://www.azurejournal.com/2010/03/aspnet-MVC-2-released/ and http://www.asp.net/mvc/
    Anish, S
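    To illustrate the first item, a strongly typed helper binds to the view's model through a compile-checked lambda instead of a magic string (the Name property here is illustrative):

        <%-- MVC 1.x, string-based: a typo compiles fine and fails at runtime --%>
        <%= Html.TextBox("Name") %>

        <%-- MVC 2, strongly typed against the view's model --%>
        <%= Html.TextBoxFor(m => m.Name) %>
        <%= Html.ValidationMessageFor(m => m.Name) %>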

    Read the article

  • Orthographic Projection with variable FOV

    - by cubrman
    We are building a game with an orthographic view. The problem we face is the fact that with different resolutions you can see a different area of the game world, e.g. with a higher resolution you can see more around you. To solve this we currently use a common scale factor that every model is scaled by, depending on resolution. But this has drawbacks when drawing shadows: I cannot set a higher view angle for the orthographic shadow camera, while when using a perspective shadow camera I get significantly worse shadow quality. So the question is: is there any way to control FOV when using an orthographic projection, or, more specifically, what is the easiest way to scale the world uniformly up or down with an orthographic projection matrix? I saw that in 3ds Max you can control FOV for an orthographic camera, and I wonder how they implemented it.
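    An orthographic camera has no FOV in the perspective sense, but the equivalent knob is the size of the view volume. A minimal sketch (using GLM purely as an example library; the zoom parameter is the invented control) that scales the world uniformly and independently of resolution:

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // zoom > 1 narrows the visible area (zoom in); zoom < 1 widens it.
        // Dividing the half-extents by zoom scales the whole world uniformly.
        glm::mat4 orthoWithZoom(float width, float height, float zoom,
                                float zNear, float zFar)
        {
            float halfW = (width  * 0.5f) / zoom;
            float halfH = (height * 0.5f) / zoom;
            return glm::ortho(-halfW, halfW, -halfH, halfH, zNear, zFar);
        }

    This is presumably what 3ds Max exposes as an orthographic "FOV": a single value driving the width/height of the ortho box.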

    Read the article

  • How to create a use case diagram for a board game played on a PC

    - by user970696
    I'm struggling with a task I was given to practice UML and use cases. The problem is that I should model a computer version of a board game, so I am unsure about a few things. Obviously it does not matter if you play against the PC or another player; the actions are the same. The game is simply like tic-tac-toe, e.g.: Actor Player ---(Place a diamond)-----include----(Check for a row)---include--(Swap players). But the game is played on the PC, so is "Check for a row" really a use case? And the same with "Swap players"? Because the system would do that. On the other hand, if it was not, how could I continue?

    Read the article

  • OpenGL behaviour depending on the graphics card?

    - by Dan
    This is something that never happened to me before. I have OpenGL code that uses GLSL shaders to texture a 3D model. The code involves a lot of GPU texture processing, blending, etc. I wanted to check how the performance of my code improves using a faster graphics card (both the new and old cards are NVIDIA, always using the NVIDIA development drivers). But now I have found that once I run the code using the new graphics card, it behaves completely differently (the final render looks wrong), probably because some blending effect is not performed correctly. I haven't really looked into what has changed, but I am guessing that some OpenGL states are, by default, set differently. Is this possible? Have you ever found different OpenGL/GLSL behaviour using different graphics cards? Any "fast" solution? (So far I've thought of plugging the old one back in, pushing all OpenGL default states, and comparing with the ones I initially get using the new card.)
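    A common guard against exactly this class of bug, sketched below (the chosen values are just examples), is to never inherit state: explicitly set every piece of state a pass depends on at the start of that pass:

        #include <GL/gl.h>

        /* Pin down all state the blending pass relies on; assume no defaults. */
        static void set_blend_pass_state(void)
        {
            glEnable(GL_BLEND);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
            glDisable(GL_DEPTH_TEST);
            glPixelStorei(GL_UNPACK_ALIGNMENT, 1); /* texture uploads often differ here */
        }

    It also helps to dump (via the relevant glGet* queries) every state the shaders touch on both cards and diff the output; the mismatch usually falls out immediately.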

    Read the article

  • How should I interpret these DirectX Caps Viewer values?

    - by tobi
    Briefly asking: what do the nodes mean, and what is the difference between them, in DirectX Caps Viewer? 1. DXGI Devices, 2. Direct3D9 Devices, 3. DirectDraw Devices. The most interesting for me is 1 vs 2. In the Direct3D9 Devices section, under the HAL node, I can see that my GeForce 8800 GT supports PixelShaderVersion 3.0. However, under DXGI Devices I have DX 10, DX 10.1 and DX 11 having shader model 4.0 (actually, why DX 11? My card is not compatible with DX 11). I am implementing a DX 11 application (including d3d11.h) with shaders compiled for version 4.0, so I can clearly see that 4.0 is supported. What is the difference between 1 and 2? Could you give me some theory behind the nodes?
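    The likely explanation is Direct3D feature levels: the DX11 API can drive DX10-class hardware such as the 8800 GT at feature level 10_0 (shader model 4.0), so the card appears under the DX11 node without being DX11 hardware. A sketch of how an application sees this at runtime:

        #include <d3d11.h>

        // Ask the DX11 runtime which feature level the GPU actually supports.
        D3D_FEATURE_LEVEL query_feature_level()
        {
            D3D_FEATURE_LEVEL ask[] = {
                D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_10_1, D3D_FEATURE_LEVEL_10_0
            };
            D3D_FEATURE_LEVEL got = D3D_FEATURE_LEVEL_10_0;
            ID3D11Device* device = nullptr;
            ID3D11DeviceContext* context = nullptr;
            D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                              ask, sizeof(ask) / sizeof(ask[0]), D3D11_SDK_VERSION,
                              &device, &got, &context);
            // On a GeForce 8800 GT this should report D3D_FEATURE_LEVEL_10_0:
            // the DX11 API driving shader-model-4.0 hardware.
            if (context) context->Release();
            if (device) device->Release();
            return got;
        }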

    Read the article

  • Webcor Builders Coordinates Construction Schedules and Mitigates Potential Delays More Efficiently with Integrated Project Management

    - by Sylvie MacKenzie, PMP
    With more than 40 years of commercial construction experience, Webcor Builders is a leading builder of distinguished, high-profile projects, including high-rise condominiums and hotels, laboratories, healthcare centers, and public works projects. Webcor is also known for its award-winning concrete, interior construction, historic restoration, and seismic renovation work. The company has completed more than 50 million square feet of projects to date. Considering the variety and complexity of the construction projects Webcor undertakes, an integrated project management solution is critical to ensuring optimal efficiency and completing client projects on time and on budget. The company previously used a number of scheduling systems for its various building projects. These packages provided different levels of schedule detail and required schedulers, engineers, and other employees to learn multiple systems. From an IT cost and complexity perspective, the company had to manage multiple scheduling systems and pay for multiple sets of licenses. The company looked to standardize on an enterprise project management system, and selected Oracle's Primavera P6 Enterprise Project Portfolio Management. Webcor uses the solution's advanced capabilities to schedule complex projects, analyze delays, model and propose multiple scenarios to demonstrate and mitigate delays and cost overruns, and process that information efficiently to deliver the scheduling precision that public and private projects require. In fact, the solution was instrumental in the company's expansion into public sector projects during the recent economic downturn; with Primavera P6 in place, Webcor can deliver the precise scheduling and milestone reporting required for large public projects. The solution is now in use managing the high-profile University of California, Berkeley Memorial Stadium project. Webcor was hired as construction manager and general contractor for the stadium renovation project, which is a fast-paced project located near the seismically active Hayward Fault Zone. Due to the University of California's football schedule, meeting the University's deadline for the coming season placed Webcor in a situation where risk awareness and early warnings of issues would be paramount. Webcor and the extended project team needed a solution that could instantly analyze alternate scenarios to mitigate potential delays. They would also need to enable multiple stakeholders to use an internet-based platform to access the schedule from various locations, and to model complicated sequencing requirements where swift decisions would be made to keep the project on track; Primavera would deliver those answers. The schedule is an integral part of Webcor's construction management process for the stadium project.
    Rather than providing the client with the industry-standard monthly update, Webcor updates the critical path method (CPM) schedule on a weekly basis. The project team also reviews the schedule weekly to confirm that progress and forecasted performance are accurate. Hired by the University for its ability to deliver in high-risk environments, the Webcor team was recently hit with a design supplement that could have added up to 70 days to the project. Using Oracle Primavera P6, the team sprang into action, analyzing multiple "what if" scenarios to review mitigation means and methods. Determined to make sure the Bears could take the field in the coming season, the project team nearly eliminated the impact with their creative analysis in working the schedule. The total time from the issuance of the final design supplement to an agreed mitigation response was less than one week; leveraging the Oracle Primavera solution, Webcor was able to deliver superior customer value. With the ability to efficiently manage projects and schedules, Webcor can ensure it completes its projects on time and on budget, as well as inform clients about what changes to plans will mean in terms of delays and additional costs. Read the complete customer case study at: http://www.oracle.com/us/corporate/customers/customersearch/webcor-builders-1-primavera-ss-1639886.html

    Read the article

  • Creating associations between pieces of information

    - by Andrea Girardi
    I deployed a project some days ago that allows users to extract medical articles using the results of a questionnaire completed by the user. For instance, if I reply in the questionnaire that I'm affected by diabetes type 2 and I'm a smoker, my algorithm extracts all articles related to diabetes, bubbling up all articles containing information about diabetes type 2 and smoking. Basically, we created a list of topics and, for every topic, we defined a kind of "guideline" that allows us to extract and order information for a user. I'm quite sure there are better ways to relate two pieces of content, but I was not able to find them on the net. Could you suggest a model, algorithm, or paper to better understand this kind of problem, and that would help me find a faster and more accurate way to extract information for a user?
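    A simple baseline worth trying before reaching for the literature: treat the questionnaire answers as weighted tags and score each article by its weighted tag overlap, then sort descending. A hypothetical Java sketch:

        import java.util.Map;
        import java.util.Set;

        class ArticleScorer {
            // Weight per user condition, e.g. {"diabetes-type-2": 2.0, "smoking": 1.0}
            double score(Set<String> articleTags, Map<String, Double> userConditions) {
                double s = 0.0;
                for (Map.Entry<String, Double> e : userConditions.entrySet()) {
                    if (articleTags.contains(e.getKey())) {
                        s += e.getValue(); // each matching tag bubbles the article up
                    }
                }
                return s;
            }
        }

    From there, the natural next steps are TF-IDF weighting over the article text and, if the taxonomy matters, ontology-based similarity (e.g. MeSH terms for medical content).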

    Read the article

  • Oracle NoSQL Database Exceeds 1 Million Mixed YCSB Ops/Sec

    - by Charles Lamb
    We ran a set of YCSB performance tests on Oracle NoSQL Database using SSD cards and Intel Xeon E5-2690 CPUs with the goal of achieving 1M mixed ops/sec on a 95% read / 5% update workload. We used the standard YCSB parameters: 13 byte keys and 1KB data size (1,102 bytes after serialization). The maximum database size was 2 billion records, or approximately 2 TB of data. We sized the shards to ensure that this was not an "in-memory" test (i.e. the data portion of the B-Trees did not fit into memory). All updates were durable and used the "simple majority" replica ack policy, effectively 'committing to the network'. All read operations used the Consistency.NONE_REQUIRED parameter, allowing reads to be performed on any replica. In the past we have achieved 100K ops/sec using SSD cards on a single shard cluster (replication factor 3), so for this test we used 10 shards on 15 Storage Nodes, with each SN carrying 2 Rep Nodes and each RN assigned to its own SSD card. After correcting a scaling problem in YCSB, we blew past the 1M ops/sec mark with 8 shards and proceeded to hit 1.2M ops/sec with 10 shards.
    Hardware Configuration
    We used 15 servers, each configured with two 335 GB SSD cards. We did not have homogeneous CPUs across all 15 servers available to us, so 12 of the 15 were Xeon E5-2690, 2.9 GHz, 2 sockets, 32 threads, 193 GB RAM, and the other 3 were Xeon E5-2680, 2.7 GHz, 2 sockets, 32 threads, 193 GB RAM. There might have been some upside in having all 15 machines configured with the faster CPU, but since CPU was not the limiting factor we don't believe the improvement would be significant. The client machines were Xeon X5670, 2.93 GHz, 2 sockets, 24 threads, 96 GB RAM. Although the clients had 96 GB of RAM, neither the NoSQL Database nor the YCSB clients require anywhere near that amount of memory, and the test could just as easily have been run with much less. Networking was all 10GigE.
    YCSB Scaling Problem
    We made three modifications to the YCSB benchmark. The first was to allow the test to accommodate more than 2 billion records (effectively ints vs. longs). To keep the key size constant, we changed the code to use base 32 for the user ids. The second change involved the way we run the YCSB client, in order to make the test itself horizontally scalable. The basic problem has to do with the way the YCSB test creates its Zipfian distribution of keys, which is intended to model "real" loads by generating clusters of key collisions. Unfortunately, the percentage of collisions on the most contentious keys remains the same even as the number of keys in the database increases. As we scale up the load, the number of collisions on those keys increases as well, eventually exceeding the capacity of the single server used for a given key. This is not a workload that is realistic or amenable to horizontal scaling. YCSB does provide alternate key distribution algorithms, so this is not a shortcoming of YCSB in general. We decided that a better model would be for the key collisions to be limited to a given YCSB client process. That way, as additional YCSB client processes (i.e. additional load) are added, they each maintain the same number of collisions they encounter themselves, but do not increase the number of collisions on a single key in the entire store. We added client processes proportionally to the number of records in the database (and therefore the number of shards).
    This change to the use of YCSB better models a use case where new groups of users are likely to access either just their own entries, or entries within their own subgroups, rather than all users showing the same interest in a single global collection of keys. If an application finds every user having the same likelihood of wanting to modify a single global key, that application has no real hope of getting horizontal scaling. Finally, we used read/modify/write (also known as "Compare And Set") style updates during the mixed phase. This uses versioned operations to make sure that no updates are lost. This mode of operation provides better application behavior than the way we have typically run YCSB in the past, and is only practical at scale because we eliminated the shared key collision hotspots. It is also a more realistic testing scenario. To reiterate, all updates used a simple majority replica ack policy, making them durable.
    Scalability Results
    In the table below, the "KVS Size" column is the number of records with the number of shards and the replication factor. Hence, the first row indicates 400m total records in the NoSQL Database (KV Store), 2 shards, and a replication factor of 3. The "Clients" column indicates the number of YCSB client processes. "Threads" is the number of threads per process with the total number of threads. Hence, 90 threads per YCSB process for a total of 360 threads. The client processes were distributed across 10 client machines.

    Shards | KVS Size (records) | Clients | Mixed Threads | Overall Throughput (ops/sec) | Read Latency av/95%/99% (ms) | Write Latency av/95%/99% (ms)
    2      | 400m (2x3)         | 4       | 90 (360)      | 302,152                      | 0.76/1/3                     | 3.08/8/35
    4      | 800m (4x3)         | 8       | 90 (720)      | 558,569                      | 0.79/1/4                     | 3.82/16/45
    8      | 1600m (8x3)        | 16      | 90 (1440)     | 1,028,868                    | 0.85/2/5                     | 4.29/21/51
    10     | 2000m (10x3)       | 20      | 90 (1800)     | 1,244,550                    | 0.88/2/6                     | 4.47/23/53

    Read the article

  • What is going on here in my solution?

    - by bbb
    I am an ASP.NET MVC programmer, and when I want to start a project I do this: I make a class library named Model for my models. I make a class library named Infrastructure.Repository for database processes. I make a class library named Application for the business logic layer. And finally I make an MVC project for the UI. But now some things are confusing me. Am I using 3-tier programming? If yes, what is n-tier programming, and which one is better? If no, what is 3-tier programming? Somewhere I have seen the tiers named DAL and BIZ. Which is correct according to naming conventions?

    Read the article

  • Vector collision detection/testing on polygons in 3D space?

    - by LRFLEW
    In the 3D FPS in Java I'm working on, I need a bullet to be fired and to tell if it hit someone. All visual objects in the game are defined through OpenGL, so the object it can be colliding with can be any drawable polygon (although they will most likely be triangles and rectangles anyway). The bullet is not an object, but will be treated as a vector that instantaneously moves all the way across the map (like the sniper rifle in Halo). What's the best way to detect/test collisions between the polygons and that vector? I have access to OpenCL; however, I have absolutely no experience with it. I am very early in the development stage, so if you think there's a better way of going about this, feel free to tell me (I barely have a player model to collide with anyway, so I'm flexible with it). Thanks
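    For triangles (and rectangles split into two triangles), the standard instant-hit test is a ray-triangle intersection such as Möller-Trumbore, run against each candidate triangle while keeping the nearest hit. A compact sketch, with a minimal hand-rolled vector class since none is given:

        final class Vec3 {
            final double x, y, z;
            Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
            Vec3 sub(Vec3 o)   { return new Vec3(x - o.x, y - o.y, z - o.z); }
            Vec3 cross(Vec3 o) { return new Vec3(y*o.z - z*o.y, z*o.x - x*o.z, x*o.y - y*o.x); }
            double dot(Vec3 o) { return x*o.x + y*o.y + z*o.z; }
        }

        final class RayTriangle {
            /** Möller-Trumbore: distance along the ray to the hit, or -1 if none. */
            static double intersect(Vec3 origin, Vec3 dir, Vec3 a, Vec3 b, Vec3 c) {
                final double EPS = 1e-9;
                Vec3 e1 = b.sub(a), e2 = c.sub(a);
                Vec3 p = dir.cross(e2);
                double det = e1.dot(p);
                if (Math.abs(det) < EPS) return -1;   // ray parallel to triangle
                double inv = 1.0 / det;
                Vec3 t = origin.sub(a);
                double u = t.dot(p) * inv;
                if (u < 0 || u > 1) return -1;
                Vec3 q = t.cross(e1);
                double v = dir.dot(q) * inv;
                if (v < 0 || u + v > 1) return -1;
                double dist = e2.dot(q) * inv;
                return dist > EPS ? dist : -1;        // hit must be in front of origin
            }
        }

    OpenCL only becomes interesting once there are far too many triangles for a simple loop plus a spatial partition (uniform grid, BVH) to handle.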

    Read the article

  • C# WebForms and Ninject

    - by ipohfly
    I'm reworking the design of an existing application built using WebForms. Currently the plan is to work it into an MVP-pattern application while using Ninject as the IoC container. The reason for Ninject being there is that the boss wanted a certain flexibility in the system so that we can build in different flavors of business logic in the model and let the programmer choose which to use based on the client request, either via XML configuration or a database setting. I know that Ninject has no need for XML configuration; however, I'm confused about how it can help to dynamically inject the dependency into the system. Imagine I have an interface IMember and I need to bind this interface to a class decided by an XML or database configuration at the launch of the application; how can I achieve that?
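    For the XML-configuration flavor, one minimal sketch (the key and type names are invented) is to resolve the implementation's type name from appSettings inside a Ninject module at startup, so swapping business-logic flavors needs no recompile:

        using System;
        using System.Configuration;
        using Ninject;
        using Ninject.Modules;

        public interface IMember { /* the interface from the question */ }

        public class MemberModule : NinjectModule
        {
            public override void Load()
            {
                // e.g. <add key="IMember.Impl" value="MyApp.PremiumMember, MyApp"/>
                string typeName = ConfigurationManager.AppSettings["IMember.Impl"];
                Bind<IMember>().To(Type.GetType(typeName, throwOnError: true));
            }
        }

        // At application start-up:
        // var kernel = new StandardKernel(new MemberModule());
        // IMember member = kernel.Get<IMember>();

    A database-driven variant reads the same type name from a settings table instead; either way, the binding is decided once, at kernel construction.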

    Read the article

  • Samsung 700Z broken by Precise install

    - by Eric
    I tried to install Precise from DVD media and got to the point where the Ubuntu logo is on screen, indicating that the installer is loading. At that point the screen went dark. After some time I rebooted the machine. To my dismay, the machine would no longer start (it powers on, but there is no BIOS start-up screen; I can hear the initial hard drive spin, which soon dies). This is a new machine (less than 2 months old) and I had no previous indications of defects before this incident. In other words, I am convinced the Ubuntu install had something to do with the failure of the machine. Has anyone else had this problem with this model of machine? Is this known to happen with Precise or other versions of Ubuntu?

    Read the article

  • Algorithm for perfect non-binary graph layout

    - by mariki
    I have a complex non-binary graph model. Each tree node can have multiple children and parents (a node can also have a connection to its "brother"). A node is represented as a square on screen, with lines to the connected nodes. For that I want to use the Draw2D and GEF libraries. The problem I am facing is the graph layout. I need a nice algorithm that can reposition the square nodes and the connections with minimum intersections and also make the layout as symmetric as possible.

    Read the article

  • At which architecture level are you running BDD tests (e.g. Cucumber)?

    - by Pete
    I have in the last year gotten quite fond of using SpecFlow (which is a .NET port of Cucumber). I have used it both to test an ASP.NET MVC application at the web layer, i.e. using browser automation, and at the controller layer. The first gives me higher confidence in the correctness of the application, because the JavaScript is tested and improper controller configuration is also caught. But those tests are slower to execute, and more complex to implement, than those testing just the controller layer. My tests are full functional tests, i.e. they exercise all layers of the application, all the way down to the database. So the first thing before any scenario is that the database is cleared of data, allowing the test to assume that only data specified in the "Given" block exists. Then I see examples of how to use these tools where the tests just exercise the model layer. So what are your experiences with these tools? Which layer of the application do you test?
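    For context, part of the appeal of these tools is that the same scenario text can drive either layer; only the step bindings change between browser automation and direct controller calls. An illustrative (invented) SpecFlow scenario:

        Feature: Account registration
          Scenario: New user signs up
            Given no user exists with email "jane@example.com"
            When Jane registers with email "jane@example.com"
            Then a user account exists for "jane@example.com"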

    Read the article

  • How is the terrain generated in Commandos and Commandos game clones/look-alikes?

    - by teodron
    The Commandos series of games, and its similar Western-themed counterpart Desperados, use a mix of 2D and 3D elements to achieve a very pleasing and immersive atmosphere. Apart from the concept, which alone made the series a best-seller, the graphics eye candy was also a much-appreciated asset of the games. I am very curious about the technique used to model and adorn the realistic terrains in those titles. Below are some screenshots that could be relevant as a reference for whoever has a candidate answer: The tiny details and patternless distribution of ornamental textures make me think that these terrains were not generated using a standard heightmap-blendmap method.

    Read the article

  • Removing AppPrincipals from Office365

    - by Sahil Malik
    So here is an annoying issue. If I have your AppPrincipal and secret, I can party as you! But as we go through our usual dev cycles, we create these application IDs. Hell, Visual Studio will create them for us, to make things easy! The problem is, many a developer, and the occasional IT ogre, may leave these AppPrincipalIds sitting there and not clean them up when they are done playing. You can look for currently registered app principals at https://yourtenant/_layouts/15/appprincipals.aspx. The problem is that that URL shows you app principals that are registered AND currently in use. App principals currently NOT in use are NOT shown on that page. The same issue applies on premises as well, even though here I am talking specifically about Office 365. Getting rid of these on-prem is easy: just use the (server-side) object model. Read full article ....

    Read the article

  • Structure of a correctly implemented JTable with TableModel and Listeners?

    - by bamboocha
    I am pretty new to Java and its JTables, and this is where I am struggling at the moment. I need to create a GUI which shows me the results of a SQL query like SELECT * FROM tblPeople WHERE name='Doe'. My idea was to create a JFrame which displays a JTable with all found records. Besides this, I also need to implement some code to handle a user double-clicking a record or selecting it with the arrow keys (additional feature: pressing 12, for example, should select the 12th record). What is the best way to structure my program (what classes do I need, and especially, where do I store my logic)? I came up with the following structure:
    - Main.java (the "view")
    - SQLConnection.java
    - PeopleTableModel.java (only stores and returns data given by the passed ResultSet; the "model", inherits from DefaultTableModel)
    - PeopleTable.java (stores basically all my logic, including the KeyListener and MouseListener; the "controller", inherits from JTable)
    Are there better ways to achieve my goals? If so, what are they?
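    As a sketch of one alternative for the listener logic (names invented): wire a plain JTable from a small controller class via composition instead of inheriting from JTable, which keeps the MVC roles cleaner:

        import java.awt.event.MouseAdapter;
        import java.awt.event.MouseEvent;
        import javax.swing.JTable;

        class PeopleTableController {
            PeopleTableController(final JTable table) {
                table.addMouseListener(new MouseAdapter() {
                    @Override
                    public void mouseClicked(MouseEvent e) {
                        if (e.getClickCount() == 2) {
                            int row = table.rowAtPoint(e.getPoint());
                            if (row >= 0) openRecord(table.convertRowIndexToModel(row));
                        }
                    }
                });
            }

            private void openRecord(int modelRow) {
                // fetch the person from the TableModel and show a detail view
            }
        }

    Subclassing JTable is rarely needed; composition keeps the Swing component generic and your logic testable on its own.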

    Read the article
