Search Results

Search found 28707 results on 1149 pages for 'writing your own'.

Page 640/1149 | < Previous Page | 636 637 638 639 640 641 642 643 644 645 646 647  | Next Page >

  • Meet up with the JCP at JavaOne Latin America

    - by Heather VanCura
    The JCP made it to JavaOne Brazil! We had a quickie presentation earlier today on JCP.Next that was well attended. Come see us at the OTN mini-theatre tomorrow from 12:00-12:15 pm for a quickie on participation. Then make your way to Mezanino Sala 12 at 12:30 pm for CON-22250. "The Java Community Process: How You Can Make a Positive Difference" will be presented by Heather VanCura, JCP, and Fabio Velloso, SouJava, on Thursday, 6 December, at 12:30 pm. Find out more about how to participate in the JCP program, the JCP.Next effort, and how to get involved with Adopt-a-JSR through your JUG (or on your own)! Here is the description, translated from Portuguese: The JCP plays a fundamental role in the evolution of Java. The session will emphasize the value of transparency and participation through the JCP, Java User Groups, and the Adopt-a-JSR program. We will also explore some of the upcoming changes to the process through the JCP.Next initiative, and explain how you can get involved. Bring your questions, your suggestions, and your concerns. We want to hear from you, and to encourage and facilitate your active participation in advancing the Java platform.

    Read the article

  • What's the proper way to merge two projects in source control software

    - by Mallow
    I'm using Fossil-SCM to maintain my projects. Since I don't work in a team, I usually have just a very linear branch of development: 1.0 - 1.1 - 1.2. I'm wondering what the procedure is when one project's task is about to be handed over to a related project, thereby rendering the first project obsolete. Although I tend to rewrite most of my code if I don't remember having already written it, I would still like to keep the code archived, and I'd rather not have a Fossil repo that is just dead. Can I merge it? Is that the proper way of handling this? For example, the code was extracting data from an Excel file in order to format an HTML page. Now I've convinced my employer to move their Excel spreadsheet into a database to decrease redundancy, increase efficiency and yadda yadda. Since I can now make logical queries against the database that don't have to jump through hoops to perform, I won't need the extra VBS files that originally manipulated the Excel file. Technically I would be porting part of the existing code into the new project. Since the new project already has its own trunk, would it be advisable to combine the trunk of a different project with it, and how would I do that exactly? So I guess my tree would look like an inverted tree, and I haven't seen examples of software branching that resemble this before, so I'm wondering what the norm is for a situation like this?

    Read the article

  • How is"cloud computing"different from "client-server"?

    - by BellevueBob
    Watching the CEO of a new "cloud computing" company describe his company on a finance TV program today, I heard him say something like "Cloud computing is superior to old-fashioned client-server computing". Now I'm confused. Can someone please explain what "cloud computing" means in contrast to client-server? As far as I understand it, cloud computing is more of a network services model, such that I do not own or maintain the physical hardware. The "cloud" is all the back-end stuff. But I still might have an application that communicates with that "cloud" environment. And if I run a web site that presents a form a user fills out, with a button on the page that returns some report generated by the web server, isn't that the same as "cloud" computing? And wouldn't you consider my web browser the "client"? Please note my question is specific to the concept of "cloud computing" with respect to "client-server". Sorry if this is an inappropriate question for this site; it's the one closest in the Stack universe and this is my first time here. I'm an old timer, programming since mainframe days in the late '70s.

    Read the article

  • How do you properly organize a commercial game?

    - by Reactorcore
    For the past months I've been studying programming and I've finally learned how to code, but one thing that confuses me is how to properly organize the design of a game project, code-wise. The game I'm building is a pretty standard commercial game. It has the basic components of a normal game: a world, characters and items interacting with each other, all of it run by a game manager. Basically you play as a hero in a world and do stuff: fight, explore and interact. Think of your standard adventure game that starts off with an intro, goes to the menu system, then gets into the game and back to the menu – pretty much like 99% of commercial games or otherwise serious game projects. That's what I'm aiming at. The problem is: How do you properly code a commercial game architecture? How do you organize it? How do you keep it from becoming unmaintainable spaghetti code? What specific things should I keep in mind when building this, code-wise? How you can help me: a) Please tell me how you code your own game projects. What is your thought process when designing the architecture? b) Recommend books, blogs, tutorials, videos or anything else on how to organize a commercial video game. c) Give hints and tips on dos/don'ts when building a game, code-wise. Please help!
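
    One common answer to the organization question is a state machine at the top: the game manager owns a current state (intro, menu, playing), and each state handles its own input, update and drawing. Below is a minimal sketch of that pattern in Python; the names (Game, MenuState, PlayState) are illustrative, not from any particular engine.

      class State:
          """One screen/mode of the game; subclasses override what they need."""
          def handle_input(self, event): pass
          def update(self, dt): pass
          def draw(self): pass

      class MenuState(State):
          def __init__(self, game):
              self.game = game
          def handle_input(self, event):
              if event == "start":
                  self.game.change_state(PlayState(self.game))

      class PlayState(State):
          def __init__(self, game):
              self.game = game
          def handle_input(self, event):
              if event == "quit_to_menu":
                  self.game.change_state(MenuState(self.game))
          def update(self, dt):
              pass  # world, characters and items get updated here

      class Game:
          """The game manager: owns the loop and delegates to the current state."""
          def __init__(self):
              self.state = MenuState(self)
          def change_state(self, new_state):
              self.state = new_state
          def tick(self, events, dt):
              for e in events:
                  self.state.handle_input(e)
              self.state.update(dt)
              self.state.draw()

    The point of the pattern is that menu code never touches gameplay code directly, which is one concrete guard against spaghetti.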

    Read the article

  • What is Cloud Computing?

    - by joelvarty
    This is a question that we discuss quite often at Edentity. It's one of those things, kind of like "web services", where the terminology has been thrown around by a ton of people and means a lot of different things. Here's my favorite diagram so far, which is a visual breakdown of the material presented here by NIST, visualized by the folks at Cloud Security Alliance. What I like about this diagram is that it shows several different ways we can differentiate our definitions of cloud computing, starting from the essential characteristics, of which "Broad Network Access" and "On-Demand Self-Service" (which are often used on their own to define cloud computing) are but a couple of the things that help make something "cloud". The most important section from my point of view is the middle one – the Service Models. This represents the different ways that cloud computing can be exposed, from the ground up. It can be an Infrastructure, a Platform or a piece of Software that an end user interacts with. This is the future, folks. more later - joel

    Read the article

  • When someone deletes a shared data source in SSRS

    - by Rob Farley
    SQL Server Reporting Services plays nicely. You can have things in the catalogue that get shared. You can have Reports that have Links, Datasets that can be used across different reports, and Data Sources that can be used in a variety of ways too. So if you find that someone has deleted a shared data source, you potentially have a bit of a horror story going on. And this works for this month's T-SQL Tuesday theme, hosted by Nick Haslam, who wants to hear about horror stories. I don't write about LobsterPot client horror stories, so I'm writing about a situation that a fellow MVP friend asked me about recently instead.

    The best thing to do is to grab a recent backup of the ReportServer database, restore it somewhere, and figure out what's changed. But of course, this isn't always possible. And it's much nicer to help someone with this kind of thing than to be trying to fix it yourself when you've just deleted the wrong data source. Unfortunately, Reporting Services lets you delete data sources without trying to scream that the data source is shared across over 400 reports in over 100 folders, as was the case for my friend's colleague. So suddenly there's a big problem – lots of reports are failing, and the time to turn it around is small. You probably know which data source has been deleted, but getting the shared data source back isn't the hard part (that's just a connection string really). The nasty bit is all the re-mapping, to get those 400 reports working again.

    I know from exploring this kind of stuff in the past that the ReportServer database (using its default name) has a table called dbo.Catalog to represent the catalogue, and that Reports are stored here. However, the information about what data sources these deployed reports are configured to use is stored in a different table, dbo.DataSource. You could be forgiven for thinking that shared data sources would live in this table, but they don't – they're catalogue items just like the reports. Let's have a look at the structure of these two tables (although if you're reading this because you have a disaster, feel free to skim past). Frustratingly, there doesn't seem to be a Books Online page for this information, sorry about that. I'm also not going to look at all the columns, just ones that I find interesting enough to mention, and that are related to the problem at hand. These fields are consistent all the way through to SQL Server 2012 – there don't seem to have been any changes here for quite a while.

    dbo.Catalog

    The Primary Key is ItemID. It's a uniqueidentifier. I'm not going to comment any more on that. A minor nice point about using GUIDs in unfamiliar databases is that you can more easily figure out what's what. But foreign keys are for that too…

    Path, Name and ParentID tell you where in the folder structure the item lives. Path isn't actually required – you could've done recursive queries to get there. But as that would be quite painful, I'm more than happy for the Path column to be there. Path contains the Name as well, incidentally.

    Type tells you what kind of item it is. Some examples are 1 for a folder and 2 for a report. 4 is linked reports, 5 is a data source, 6 is a report model. I forget the others for now (but feel free to put a comment giving the full list if you know it).

    Content is an image field, remembering that image doesn't necessarily store images – these days we'd rather use varbinary(max), but even in SQL Server 2012, this field is still image. It stores the actual item definition in binary form, whether it's actually an image, a report, whatever.

    LinkSourceID is used for Linked Reports, and has a self-referencing foreign key (allowing NULL, of course) back to ItemID.

    Parameter is an ntext field containing XML for the parameters of the report. Not sure why this couldn't be a separate table, but I guess that's just the way it goes. This field gets changed when the default parameters get changed in Report Manager.

    There is nothing in dbo.Catalog that describes the actual data sources that the report uses. The default data sources would be part of the Content field, as they are defined in the RDL, but when you deploy reports, you typically choose NOT to replace the data sources. Anyway, they're not in this table. Maybe it was already considered a bit wide to throw in another ntext field, I'm not sure. They're in dbo.DataSource instead.

    dbo.DataSource

    The Primary Key is DSID. Yes, it's a uniqueidentifier...

    ItemID is a foreign key reference back to dbo.Catalog.

    Fields such as ConnectionString, Prompt, UserName and Password do what they say on the tin, storing information about how to connect to the particular source in question.

    Link is a uniqueidentifier, which refers back to dbo.Catalog. This is used when a data source within a report refers back to a shared data source, rather than embedding the connection information itself. You'd think this should be enforced by foreign key, but it's not. It does allow NULLs though.

    Flags is an int, and I'll come back to this.

    When a Data Source gets deleted out of dbo.Catalog, you might assume that it would be disallowed if there are references to it from dbo.DataSource. Well, you'd be wrong. And not because of the lack of a foreign key either. Deleting anything from the catalogue is done by calling a stored procedure called dbo.DeleteObject. You can look at the definition in there – it feels very much like the kind of Delete stored procedure that many people write, the kind of thing that means they don't need to worry about allowing cascading deletes with foreign keys – because the stored procedure does the lot. Except that it doesn't quite do that. If it deleted everything on a cascading delete, we'd've lost all the data sources as configured in dbo.DataSource, and that would be bad. This is fine if the ItemID from dbo.DataSource hooks in – if the report is being deleted. But if a shared data source is being deleted, you don't want to lose the existence of the data source from the report. So it sets the Link to NULL, and it marks the data source as invalid. We see this code in that stored procedure:

      UPDATE [DataSource]
      SET    [Flags] = [Flags] & 0x7FFFFFFD, -- broken link
             [Link] = NULL
      FROM   [Catalog] AS C
             INNER JOIN [DataSource] AS DS ON C.[ItemID] = DS.[Link]
      WHERE  (C.Path = @Path OR C.Path LIKE @Prefix ESCAPE '*')

    Unfortunately there's no semi-colon on the end (but I'd rather they fix the ntext and image types first), and don't get me started about using the table name in the UPDATE clause (it should use the alias DS). But there is a nice comment about what's going on with the Flags field.

    What I'd LIKE it to do would be to set the connection information to a report-embedded copy of the connection information that's in the shared data source, the one that's about to be deleted. I understand that this would cause someone to lose the benefit of having the data sources configured in a central point, but I'd say that's probably still slightly better than LOSING THE INFORMATION COMPLETELY. Sorry, rant over. I should log a Connect item – I'll put that on my todo list.

    So it sets the Link field to NULL, and marks the Flags to tell you they're broken. So this is your clue to fixing it. A bitwise AND with 0x7FFFFFFD is basically stripping out the '2' bit from a number. So numbers like 2, 3, 6, 7, 10, 11, etc., whose binary representation ends in either 11 or 10, get turned into 0, 1, 4, 5, 8, 9, etc. We can test for it using a WHERE clause that matches the SET clause we've just used. I'd also recommend checking for Link being NULL and also having no ConnectionString. And join back to dbo.Catalog to get the path (including the name) of the broken reports – in case you get a surprise from a different data source having been broken in the past.

      SELECT c.Path, ds.Name
      FROM   dbo.[DataSource] AS ds
             JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID
      WHERE  ds.[Flags] = ds.[Flags] & 0x7FFFFFFD
             AND ds.[Link] IS NULL
             AND ds.[ConnectionString] IS NULL;

    When I just ran this on my own machine, having deleted a data source to check my code, I noticed a Report Model in the list as well – so if you had thought it was just going to be reports that were broken, you'd be forgetting something.

    So to fix those reports, get your new data source created in the catalogue, and then find its ItemID by querying Catalog, using Path and Name to find it. And then use this value to fix them up. To fix the Flags field, just add 2. I prefer to use bitwise OR, which should do the same. Use the OUTPUT clause to get a copy of the DSIDs of the ones you're changing, just in case you need to revert something later after testing (doing it all in a transaction won't help, because you'll just lock out the table, stopping you from testing anything).

      UPDATE ds
      SET    [Flags] = [Flags] | 2,
             [Link] = '3AE31CBA-BDB4-4FD1-94F4-580B7FAB939D' /* insert your own GUID */
      OUTPUT deleted.Name, deleted.DSID, deleted.ItemID, deleted.Flags
      FROM   dbo.[DataSource] AS ds
             JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID
      WHERE  ds.[Flags] = ds.[Flags] & 0x7FFFFFFD
             AND ds.[Link] IS NULL
             AND ds.[ConnectionString] IS NULL;

    But please be careful. Your mileage may vary. And there's no reason why 400-odd broken reports needs to be quite the nightmare that it could be. Really, it should be less than five minutes. @rob_farley

    Read the article

  • Collision Detection for a 2D RPG

    - by PHMitrious
    First of all, I have done some research on this topic before asking, and I'm asking this question as a means to get some opinions, so I don't make a decision only on my own but take other people's experience into account as well. I'm starting a 2D online RPG project. I am using SFML for graphics and input, and I'm creating a basic game structure, with modules for each part of the game. Well, let me get to the point – I just wanted to give you guys some context. I want to decide how I'm going to handle collision detection. I'm going to work with maps built on a tile map divided in layers (as usual), plus an extra 2 layers – not exactly in the map – for objects. So I'll have collisions between objects and agents (players, NPCs, monsters, spells, etc.) and between agents and tiles. The second kind can be solved easily; the first kind needs a little bit of work. I considered creating a basic collision test engine using polygons, with a quadtree to cut down the number of tests since I'm going to be working with big maps with lots of objects – creating both a physical and a graphical representation of the world. And I also considered using a physics engine like Box2D just for collision tests. I think the first approach would take more work on my part, but the second one would have the overhead of using a whole physics engine for just collision detection and no physics. What do you guys think?
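
    For the first approach, here is a minimal sketch of a quadtree broad phase over axis-aligned bounding boxes in Python (class names are illustrative); narrow-phase polygon tests would then only run on the pairs the tree returns:

      class Rect:
          def __init__(self, x, y, w, h):
              self.x, self.y, self.w, self.h = x, y, w, h

          def intersects(self, other):
              return (self.x < other.x + other.w and other.x < self.x + self.w and
                      self.y < other.y + other.h and other.y < self.y + self.h)

      class Quadtree:
          MAX_OBJECTS, MAX_DEPTH = 8, 6

          def __init__(self, bounds, depth=0):
              self.bounds, self.depth = bounds, depth
              self.objects, self.children = [], []

          def insert(self, rect):
              if self.children:
                  # objects straddling a boundary go into every child they touch,
                  # so query() can return duplicates; a real engine deduplicates
                  for child in self.children:
                      if child.bounds.intersects(rect):
                          child.insert(rect)
                  return
              self.objects.append(rect)
              if len(self.objects) > self.MAX_OBJECTS and self.depth < self.MAX_DEPTH:
                  b, d = self.bounds, self.depth + 1
                  hw, hh = b.w / 2, b.h / 2
                  self.children = [Quadtree(Rect(b.x, b.y, hw, hh), d),
                                   Quadtree(Rect(b.x + hw, b.y, hw, hh), d),
                                   Quadtree(Rect(b.x, b.y + hh, hw, hh), d),
                                   Quadtree(Rect(b.x + hw, b.y + hh, hw, hh), d)]
                  pending, self.objects = self.objects, []
                  for o in pending:
                      self.insert(o)

          def query(self, rect, found=None):
              found = [] if found is None else found
              if not self.bounds.intersects(rect):
                  return found
              for o in self.objects:
                  if o is not rect and o.intersects(rect):
                      found.append(o)
              for child in self.children:
                  child.query(rect, found)
              return found

      world = Quadtree(Rect(0, 0, 2048, 2048))  # map bounds
      # rebuild/insert object and agent boxes each frame, then for a player:
      # candidates = world.query(player_box)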

    Read the article

  • Windows 7 upgrade licensing

    - by bwerks
    I'm having trouble wading through Microsoft's marketing information. Does anyone know if Windows 7 x86 to Windows 7 x64 is a valid upgrade path? I know you can't actually use the built-in "upgrade" installation path; this is more of a licensing question. Though maybe that answers my own question: is this idea even possible? Or do "upgrade" versions of Windows function only when executed from inside the OS, and not when doing fresh installs? Thanks!

    Read the article

  • Tools to monitor guest OS performance in vSphere

    - by Quick Joe Smith
    I am looking for some tool or way to retrieve performance data from guest VMs running under vSphere 4.1. I am currently interested in the 4 basic metrics: CPU (%), memory (%), disk availability (%) and network utilisation (Kb/s). The issue I have is that all of vSphere's performance data is from an ESXi host perspective (active, shared, consumed, overhead, swapped, etc.), which is far removed from the data from the VM's own perspective. For instance, I have a Windows server VM idling, using around 410MB (~25% of its allocated 2GB) as reported by Task Manager, and this is the value I'm after. vSphere's metrics seem unable to arrive at this figure by any reliable and repeatable means. Is anyone aware of tools that can obtain this kind of data? The simpler, the better.

    Read the article

  • Hack a Linksys Router into a Ambient Data Monitor

    - by Jason Fitzpatrick
    If you have a data source (like a weather report, bus schedule, or other changing data set), you can pull it and display it with an ambient data monitor; this fun build combines a hacked Linksys router and a modified toy bus to display transit arrival times. John Graham-Cumming wanted to keep an eye on the current bus arrival time tables without constantly visiting the web site to check them. His workaround turns a hacked Linksys router, a display, a modified London city bus (you could hack apart a more project-specific enclosure, of course), and a simple bit of code that polls the bus schedule's API into a cool ambient data monitor that displays the arrival time, in minutes, of the next two buses that will pass by his stop. The whole thing could easily be adapted to another API to display anything from stock prices to weather temps. Hit up the link below for more information on the project. Ambient Bus Arrival Monitor Hacked from Linksys Router [via Make]
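
    The polling side of such a build is simple; here is a sketch of the idea in Python, assuming a hypothetical JSON endpoint that returns arrival times in minutes (the real project runs comparable logic on the router itself):

      import json
      import time
      import urllib.request

      ARRIVALS_URL = "http://transit.example.com/stop/12345/arrivals"  # hypothetical

      def next_arrivals(limit=2):
          """Fetch the arrival feed and return the soonest arrival times."""
          with urllib.request.urlopen(ARRIVALS_URL, timeout=10) as resp:
              data = json.load(resp)
          return sorted(a["minutes"] for a in data["arrivals"])[:limit]

      while True:
          try:
              print("Next buses in:", next_arrivals(), "minutes")  # drive the display here
          except OSError as err:
              print("poll failed:", err)
          time.sleep(60)  # be polite to the API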

    Read the article

  • Backup media manager, library or similar reference application

    - by Tarnschaf
    I'm looking for a backup media manager that will keep me up-to-date on where my backups are, how they're stored and what's stored on them. I want it to be able to track the following: my used backup media (e.g. DVD1, DVD2); my backed-up assets at a high level (such as "family photos from 2003", "laptop drivers"); details of the assets ("Ninas Birthday 2003"); where the backup media is currently stored; and when the media was burned (to re-burn in case of media degeneration). It should be possible to navigate back and forth between media and assets. I also thought about marking assets as "deprecated": if all assets on a given medium are deprecated, the program should tell me, so I don't have to keep it any more. Does anyone know of a program with this feature set? Or will I have to start my own reference in something like Access?
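
    In case nothing ready-made turns up, here is a sketch of the "start my own reference" fallback, written as a small SQLite schema in Python rather than Access (table and column names are illustrative):

      import sqlite3

      conn = sqlite3.connect("backup_index.db")
      conn.executescript("""
      CREATE TABLE IF NOT EXISTS media (
          id INTEGER PRIMARY KEY,
          label TEXT NOT NULL,          -- e.g. 'DVD1'
          stored_at TEXT,               -- physical location
          burned_on DATE                -- for re-burn reminders
      );
      CREATE TABLE IF NOT EXISTS assets (
          id INTEGER PRIMARY KEY,
          media_id INTEGER REFERENCES media(id),
          title TEXT NOT NULL,          -- e.g. 'family photos from 2003'
          details TEXT,                 -- e.g. 'Ninas Birthday 2003'
          deprecated INTEGER DEFAULT 0  -- mark obsolete assets
      );
      """)

      # Media whose assets are all deprecated can be discarded:
      for (label,) in conn.execute("""
          SELECT m.label FROM media m
          WHERE NOT EXISTS (SELECT 1 FROM assets a
                            WHERE a.media_id = m.id AND a.deprecated = 0)"""):
          print("safe to discard:", label)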

    Read the article

  • developers-designers-testers interaction [closed]

    - by user29124
    Sorry for my bad English, and feel free to skip this and not waste your time, because it is just a layman developer's lament... It seems no one wants to learn anything at my workplace. We have the Mantis bug tracker, but our testers use Google Docs for reports; only developers and the team lead report bugs in Mantis. We have SVN for version control and use Smarty as the template system, but our designers hand us pure HTML in archives (sometimes it's ugly for programmers, but mostly it's OK), and changes to the design made by programmers go nowhere (I mean the designers keep using their own obsolete HTML and CSS most of the time). We have a testing environment, but designers don't have access to it, even with restricted accounts. So we can only ask them where to look for the problem and then investigate it ourselves (and make changes to the CSS ourselves, which go nowhere most of the time...). I won't even mention legacy code without documentation, tests, or any requirements – just the absence of real interaction in the programmers-designers-testers triangle. I'm not talking about using HAML, SASS, continuous integration, or anything else fancy, just about all participants of the development process using the basic tools. Maybe the absence of communication is not a problem on short projects that finish in 2 months, but it is on projects that last for years. Any comments please...

    Read the article

  • Best Amazon S3 File Manager Utility?

    - by mmacaulay
    Ever since I started using Amazon's S3 service, I've been struggling to find a good solution for simple file management, without having to write my own app to browse my buckets, upload and delete files, etc. The best I've found so far to do this is S3Fox, but it's far from perfect; it has problems deleting files and folders. Comments on the Firefox plugin page indicate I'm not the only person with this problem, and the developer does not respond to emails. I've looked around briefly but couldn't find anything that looked any better than S3Fox. Please tell me there's a better way! Edit (07/26/2009): The Firefox S3Fox extension seems to be getting love from its developer again; the problems I was having before have gone away, and I'm using it on a regular basis now with no problems!
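
    Not a file manager recommendation, but for the "write my own app" fallback the poster mentions, a minimal sketch using the boto3 library shows how little it takes (the bucket name is a placeholder):

      import boto3

      s3 = boto3.client("s3")
      BUCKET = "my-example-bucket"  # placeholder

      def list_keys(prefix=""):
          resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=prefix)
          return [obj["Key"] for obj in resp.get("Contents", [])]

      def upload(local_path, key):
          s3.upload_file(local_path, BUCKET, key)

      def delete(key):
          s3.delete_object(Bucket=BUCKET, Key=key)

      print(list_keys())  # browse; upload(...) and delete(...) cover the basics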

    Read the article

  • Automatically encrypting incoming email

    - by user16067
    I have a small website and would like to encrypt incoming email using GPG. Is there a way to force sendmail to check email and encrypt it if it's not already encrypted? I'm using GPG on a Linux server. Thanks. [added] Someone asked what I hope to accomplish. My intent is for the users of the email to become more familiar with seeing their own email encrypted, losing that fear of the unknown. The side benefit is that the email can't be looked at later down the road. If the email isn't encrypted on its way in, I'm unable to do anything about that. I'm assuming most email would be nosed around with once it's already on my hard drive, so GPG would protect against those issues.
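
    One common shape for this is a delivery-time filter (hooked in from sendmail via an alias or procmail) that pipes the message body through gpg when it is not already encrypted. Here is a minimal sketch in Python that handles only simple single-part messages; the key ID is a placeholder:

      import email
      import subprocess
      import sys

      KEY_ID = "user@example.com"  # placeholder: the key to encrypt to

      raw = sys.stdin.buffer.read()
      msg = email.message_from_bytes(raw)
      body = b"" if msg.is_multipart() else (msg.get_payload(decode=True) or b"")

      if msg.is_multipart() or b"-----BEGIN PGP MESSAGE-----" in body:
          sys.stdout.buffer.write(raw)  # multipart or already encrypted: pass through
      else:
          gpg = subprocess.run(
              ["gpg", "--batch", "--armor", "--trust-model", "always",
               "--encrypt", "--recipient", KEY_ID],
              input=body, stdout=subprocess.PIPE, check=True)
          msg.set_payload(gpg.stdout.decode("ascii"))
          sys.stdout.buffer.write(msg.as_bytes())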

    Read the article

  • When mail forwarding with Exim, how do I rewrite the To header with the true destination address?

    - by Jom
    I have mail forwarding set up with Exim using a domain forwarding file:

      virtual_aliases_nostar:
        driver = redirect
        allow_defer
        allow_fail
        data = ${if exists{/etc/valiases/$domain}{${lookup{$local_part@$domain}lsearch{/etc/valiases/$domain}}}}
        file_transport = address_file
        group = mail
        pipe_transport = virtual_address_pipe
        retry_use_local_part
        domains = lsearch;/etc/localdomains
        unseen

    It is working fine. However, I would like to rewrite the "To" header. In my system filter, I would like to put something like:

      headers remove to
      headers add "To: $recipient:"

    I've tried:

      headers remove to
      headers add "To: $recipient:"

      headers remove to
      headers add "To: $h_env-to:"

      headers remove to
      headers add "To: $env-to:"

    The intent is to have the end recipient see their own email address in the To: line of their mail client. I can't seem to figure out what the correct variable is for the final destination of the email so that I can put it in the To header. I've read through the Exim docs and can't seem to find it. I've also looked at the headers of an email in a mail client and can't see it there either. Any suggestions would be appreciated.

    Read the article

  • Tomcat - virtualhosting - name / ip / port - based

    - by lisak
    Hey, what are the usage scenarios for these kinds of virtual hosting? Name-based: typical Tomcat virtual hosting, one HOME instance with many contexts, each as an individual host. IP-based / port-based: multiple instances of Tomcat (how is it with performance and memory consumption?) running on IP aliases (virtual IPs) for one network adapter, usually behind an Apache HTTP server that can do name-based virtual hosting. Otherwise I can't figure out how I would forward requests in iptables/a firewall based on IP address, when there is just one. How is IP-based virtual hosting done with respect to Tomcat and multiple instances? I'd like to hear some usage scenarios from your experience – how you are running your applications. Because there are applications that have their own modified classloader and are developed to run alone within a Tomcat instance, and then there are trivial applications which can run within one instance without problems. Many thanks

    Read the article

  • How to manage long running background threads and report progress with DDD

    - by Mr Happy
    Title says most of it. I have found surprisingly little information about this. I have a long-running operation of which the user wants to see the progress (as in, item x of y processed). I also need to be able to pause and stop the operation. (Stopping doesn't roll back the items already processed.) The thing is, it's not that each item takes a long time to get processed; it's that there are usually a lot of items. And what I've read so far is that it's somewhat of an anti-pattern to put something like a queue in the DB. I currently don't have any messaging system in place, and I've never worked with one either. Another thing I read somewhere is that progress reporting is something that belongs in the application layer, but it didn't go into the details. So having said all this, what I have in mind is the following. A user request with a list of items enters the application layer. The application layer gets some information from the domain needed to process the items. The application layer passes the items and the information off to some domain service (should the implementation of this service belong in the infrastructure layer?). This service spins up a worker thread with callbacks for both progress reporting and pausing/stopping it. This worker thread will process each item in its own UoW. This means the domain information from earlier needs to be stored in some DTO. Since nothing is really persisted, the service should be a singleton and thread-safe. Whenever a user requests a progress report or wants to pause/stop the operation, the application layer will ask the service. Would this be a correct solution? Or am I at least on the right track with this? Especially the singleton and thread-safe part makes the whole thing feel icky.
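
    For the worker itself, here is a minimal sketch of the pause/stop/progress mechanics (Python threading; names are illustrative, and each process(item) call would open its own UoW):

      import threading

      class BatchWorker:
          def __init__(self, items, process, on_progress):
              self._items = items
              self._process = process          # callable handling one item (one UoW)
              self._on_progress = on_progress  # callable(done, total)
              self._pause = threading.Event()
              self._pause.set()                # set = running, cleared = paused
              self._stop = threading.Event()
              self._thread = threading.Thread(target=self._run, daemon=True)

          def start(self):
              self._thread.start()

          def pause(self):
              self._pause.clear()

          def resume(self):
              self._pause.set()

          def stop(self):
              self._stop.set()
              self._pause.set()  # unblock a paused worker so it can exit

          def _run(self):
              total = len(self._items)
              for done, item in enumerate(self._items, start=1):
                  self._pause.wait()           # block here while paused
                  if self._stop.is_set():
                      break                    # stop without rolling back finished items
                  self._process(item)
                  self._on_progress(done, total)

    The application layer keeps a reference to the worker (e.g. keyed by a job id in the singleton service) and answers progress or pause/stop requests by calling these methods.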

    Read the article

  • Is there a way to set up an SMTP relay that allows users of a web app to have the web app send email

    - by mic
    The web service sends out emails on behalf of the users to their customers. So [email protected] uses the web service, and the web service sends emails. The emails should appear as coming from [email protected]. Currently we are trying to configure the web service to act as an email client for each user, each user being able to create their own profile in which they configure their SMTP server credentials. But given that there are more configuration options than you can shake a stick at (not to mention trying to explain to users what info to get from where: POP-before-SMTP, TLS, SSL, AUTH, etc.), I am wondering if there could be a different way. How, if at all, could this be approached? Can I set up a Postfix server to do what I need without running into another admin nightmare or being blocked for spamming? Thank you for your insights

    Read the article

  • UPS for hard drive protection

    - by dimi
    I am in a place where the electricity is not ideal (old house, no ground); it occasionally shuts down, and supposedly there are some spikes. I am considering using a UPS with the goal of increasing the safety of my personal data. My first priority is the health of my internal and external USB hard drives, which can be damaged by possible power instability. I do not care that much about possible losses of unsaved work; I just want to let my system have a minimum of time to turn off without any risk of physically damaging my hard drives. Would a cheap offline UPS suit my needs? Or do I need a better one with an automatic voltage regulator (AVR)? How critical is AVR for the hard drives? The external ones require their own power supplies and will be plugged directly into the UPS.

    Read the article

  • Is Play Framework good for doing some logic in parallel?

    - by pmichna
    I'm going to build a web application that's going to host urban games. A user visits my website, clicks "Start game", and starts receiving SMS messages when they get to certain locations, which they have to answer to get points. My question: is Play suitable for this kind of application? From what I've read, I know for sure it's OK for traditional web applications: a user interface over data storage and manipulation. But what if, after clicking the start button, some logic has to go on its own course? How would I handle checking the geolocation of the players in parallel (I have an API for that)? I guess in some threads that would ping them every ~5 sec. and do some processing, but is it possible to just "disconnect" those from the main user interface? So to sum up: I want an application written in Play that starts a separate thread for a game after clicking "start game", while other users are still able to view their data (statistics etc.) as the threads work their way through the game logic. I found something like Jobs, but they are documented for version 1.2 (the current one is 2.2). Sorry for my somewhat fuzzy explanation; I tried to do my best.

    Read the article

  • GPL - what is distribution?

    - by Martin Beckett
    An interesting point came up on another thread about alleged misappropriation of a GPL project. In this case the enterprise software was used by some large companies who essentially took the code, changed the name, removed the GPL notices and used the result. The point was: if a company does this and only uses the software internally, then there isn't any distribution, and that's perfectly legal under the GPL. Modifications by their own employees for internal use would also be allowed. So at what point does it become a distribution? Presumably if they brought in outside contractors under 'work for hire', their modifications would also be internal and so not a distribution. If they hired an external software outfit to do modifications and those changes were only used internally by the company, would those changes be distributed? Does the GPL apply to the client or to the external developers? What if the company then gives the result to another department, another business unit, another company? What if the other company is a wholly owned subsidiary? ps. yes, I know the answer is "ask a lawyer". But all the discussion I have seen over GPL2/GPL3 distribution has been about web services – not about internal use.

    Read the article

  • Be There: Tinkerforge/NetBeans Platform Integration Course

    - by Geertjan
    Tinkerforge is an electronic construction kit. It exposes a number of API bindings, including, of course, Java. The nice thing is also that Tinkerforge products are open source, on both the hardware and software levels, so that you can take their designs as a starting point for your own modifications. "The TinkerForge system is a set of pre-built electronics boards that are built in such a way that you can stack the boards (known as bricks), attach accessories (known as bricklets), and have your prototype up and running quickly. Unlike systems such as the Arduino or Launchpad, the TinkerForge has to be attached to a computer and the computer does all of the work. With an easy set of application programming interfaces (APIs) available in C/C++, C#, Java, PHP, and Ruby, the system is easy to interface and program over USB in a snap." (from this useful article) Henning Krüp, who has arranged several NetBeans Platform Certified Training Courses in the past in the Nordhorn/Lingen area in Germany, had the inspired idea to focus the next course on integration with Tinkerforge. In other words, the whole course will be focused on creating a standalone Java desktop application that leverages the NetBeans Platform to interact with Tinkerforge! Interested in joining the course or setting up something similar yourself? The course organized by Henning will be held from 19 to 21 September, as explained here, together with contact details. If you'd like to organize a similar course at a location of your choosing, leave a comment at the end of this blog entry and we'll set something up together!
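
    To give a feel for how small those API bindings are, here is a sketch using the Python binding (the quote above lists C/C++, C#, Java, PHP and Ruby, but the pattern is the same across languages); host, port and UID are placeholders, so check the Tinkerforge docs for the exact classes:

      from tinkerforge.ip_connection import IPConnection
      from tinkerforge.bricklet_temperature import BrickletTemperature

      HOST, PORT, UID = "localhost", 4223, "abc"  # placeholders

      ipcon = IPConnection()
      sensor = BrickletTemperature(UID, ipcon)  # a bricklet attached to a brick stack
      ipcon.connect(HOST, PORT)

      # get_temperature() reports hundredths of a degree Celsius
      print("Temperature: %.2f C" % (sensor.get_temperature() / 100.0))
      ipcon.disconnect()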

    Read the article

  • Postfix - How to configure to send these emails?

    - by Jon
    I want my mail server to send mail from my local application "from" any user-supplied email address "to" my own address, say "[email protected]". The MX records for "mysite.com" actually point to a different server, even though the outgoing mail server is running with mydomain set to "mysite.com". Perhaps this is part of the problem? Postfix is currently causing an SMTPRecipientsRefused error within the Python application. Can anyone point me to what to change in the configuration? Thanks

    postconf -n:

      alias_database = hash:/etc/aliases
      alias_maps = hash:/etc/aliases
      append_dot_mydomain = no
      biff = no
      config_directory = /etc/postfix
      inet_interfaces = all
      mailbox_command = procmail -a "$EXTENSION"
      mailbox_size_limit = 0
      mydestination = mysite.com, localhost.com, , localhost, *
      myhostname = mysite.com
      mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
      myorigin = /etc/mailname
      readme_directory = no
      recipient_delimiter = +
      relayhost =
      smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
      smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
      smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
      smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
      smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
      smtpd_use_tls = yes

    Read the article

  • How do I get long command lines to wrap to the next line?

    - by BrianH
    Edit: It was my .bashrc file. I've copied the same profile from machine to machine, and I used special characters in my $PS1 that were throwing it off (non-printing sequences such as color codes need to be wrapped in \[ and \] so that bash can compute the prompt's printable width correctly). I'm now sticking with the standard bash variables for my $PS1. Thanks to @ændrük for the tip on the .bashrc! ...End Edit...

    Something I have noticed in Ubuntu for a long time that has been frustrating to me: when I am typing a command at the command line that gets longer (wider) than the terminal width, instead of wrapping to a new line, the cursor goes back to column 1 on the same line and starts over-writing the beginning of my command line. (It doesn't actually overwrite the actual command, but visually, it is overwriting the text that was displayed.) It's hard to explain without seeing it, but let's say my terminal was 20 characters wide (mine is more like 120 characters, but for the sake of an example), and I want to echo the English alphabet. What I type is this:

      echo abcdefghijklmnopqrstuvwxyz

    But what my terminal looks like before I hit the Enter key is:

      pqrstuvwxyzghijklmno

    When I hit Enter, it echoes abcdefghijklmnopqrstuvwxyz, so I know the command was received properly. It just wrapped my typing after the "o" and started over on the same line. What I would expect to happen, if I typed this command on a terminal that was only 20 characters wide, would be this:

      echo abcdefghijklmno
      pqrstuvwxyz

    Background: I am using bash as my shell, and I have this line in my ~/.bashrc:

      set -o vi

    to be able to navigate the command line with vi commands. I am currently using Ubuntu 10.10 server and connecting to the server with PuTTY. In any other environment I have worked in, if I type a long command line, it will add a new line underneath the line I am working on when my command gets longer than the terminal width, and when I keep typing I can see my command on 2 different lines. But for as long as I can remember using Ubuntu, my long commands only occupy 1 line. This also happens when I am going back to previous commands in the history (I hit Esc, then 'K' to go back to previous commands): when I get to a previous command that was longer than the terminal width, the command line gets mangled and I cannot tell where I am in the command. The only work-around I have found to see the entire long command is to hit "Esc-V", which opens up the current command in a vi editor. I don't think I have anything odd in my .bashrc file. I commented out the "set -o vi" line and I still had the problem. I downloaded a fresh copy of PuTTY and didn't make any changes to the configuration – I just typed in my host name to connect, and I still have the problem, so I don't think it's anything with PuTTY (unless I need to make some config changes). Has anyone else had this problem, and can anyone think of how to fix it? Thanks in advance! Brian

    Read the article

  • Test/Dummy SMTP server for Windows

    - by geoaxis
    I would like to install a test/dummy SMTP server on a Windows 2008 server (virtual box). I just want to test my web application on the machine itself, so I don't need the mails to go out on the internet – just to be written to disk (so that I can verify that the mail function was indeed called and the correct data was handed over to SMTP). Can you recommend such a tool? I guess starting your own SMTP server in Python is an option. I am looking for a simple (ready-to-use) solution targeted at test systems. I will need to integrate it into automated tests (Selenium) at a later stage. Thanks
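
    The "own SMTP server in Python" route is only a few lines with the standard library's smtpd module (removed in Python 3.12, where the aiosmtpd package is the successor); for quick console checks there is also the one-liner `python -m smtpd -n -c DebuggingServer localhost:1025`. A sketch that dumps every incoming message to a file instead of delivering it:

      import asyncore
      import smtpd

      class FileDumpServer(smtpd.SMTPServer):
          def process_message(self, peer, mailfrom, rcpttos, data, **kwargs):
              # write each incoming mail to disk for later verification
              with open("received_mail.log", "a") as log:
                  log.write("From: %s\nTo: %s\n%s\n%s\n"
                            % (mailfrom, rcpttos, data, "-" * 60))

      # point the web application's SMTP settings at localhost:1025
      server = FileDumpServer(("127.0.0.1", 1025), None, decode_data=True)
      asyncore.loop()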

    Read the article
