Search Results

Search found 28559 results on 1143 pages for 'upgrade issue'.


  • Shrink NTFS Windows 7 Partition with GParted

    - by user15961
    I am running a dual-boot system with Windows 7 and Ubuntu 10.10. Initially I allocated about 20GB for my Ubuntu partition; however, I quickly ran out of that space and am now looking to expand my partition. Currently my NTFS partition (450GB) has about 130GB of free space. I tried using GParted to shrink the partition but encountered the following error. I booted into Windows so I could run chkdsk, but the countdown freezes at 1 upon reboot. I tried multiple methods to resolve that issue but nothing seems to work.

    Finally I gave up, and now I just want to know what is the best way for me to force GParted to shrink the partition regardless of the errors. I don't really have anything important and I don't mind risking the data. I just don't want to wipe the entire NTFS partition, because I don't have a Windows install CD and might require Windows later on for some programs. I tried using sudo ntfsresize but that spews out the same error as GParted... Any ideas?

        Check and repair file system (ntfs) on /dev/sda2    00:00:09 ( ERROR )
        calibrate /dev/sda2                                 00:00:00 ( SUCCESS )
            path: /dev/sda2
            start: 36944325
            end: 976771119
            size: 939826795 (448.14 GiB)
        check file system on /dev/sda2 for errors and (if possible) fix them   00:00:09 ( ERROR )
        ntfsresize -P -i -f -v /dev/sda2
        ntfsresize v2.0.0 (libntfs 10:0:0)
        Device name        : /dev/sda2
        NTFS volume version: 3.1
        Cluster size       : 4096 bytes
        Current volume size: 481191318016 bytes (481192 MB)
        Current device size: 481191319040 bytes (481192 MB)
        Checking for bad sectors ...
        Checking filesystem consistency ...
        Cluster 63468 is referenced multiple times!
        Cluster 63469 is referenced multiple times!
        Cluster 63465 is referenced multiple times!
        Cluster 63466 is referenced multiple times!
        Cluster 63467 is referenced multiple times!
        Cluster 165621 is referenced multiple times!
        Cluster 165622 is referenced multiple times!
        Cluster 165623 is referenced multiple times!
        Cluster 165624 is referenced multiple times!
        ERROR: Filesystem check failed!
        ERROR: 9 clusters are referenced multiply times.
        NTFS is inconsistent. Run chkdsk /f on Windows then reboot it TWICE!
        The usage of the /f parameter is very IMPORTANT!
        No modification was and will be made to NTFS by this software until it gets repaired.

    Read the article

  • Failed to start up after upgrading software

    - by Landy
    I asked this question on Super User an hour ago, then I learned about this community, so I moved the question here... I've been running Ubuntu 10.10 on a physical x86-64 machine. Today Update Manager reminded me that there are some updates to install and I confirmed the action. I should have read the update list but I didn't. I can only remember there was an update about cups. After the upgrade, Update Manager required a restart and I confirmed that too. But after the restart, the computer can't start up. There are errors in the console:

        Begin: Running /scripts/init-premount ... done.
        Begin: Mounting root file system ... Begin: Running /scripts/local-top ... done.
        [xxx] usb 1-8: new high speed USB device using ehci_hcd and address 3
        [xxx] usb 2-1: new full speed USB device using ohci_hcd and address 2
        [xxx] hub 2-1:1.0: USB hub found
        [xxx] hub 2-1:1.0: 4 ports detected
        [xxx] usb 2-1.1: new low speed USB device using ohci_hcd and address 3
        Gave up waiting for root device. Common problems:
         - Boot args (cat /proc/cmdline)
           - Check rootdelay= (did the system wait long enough?)
           - Check root= (did the system wait for the right device?)
         - Missing modules (cat /proc/modules; ls /dev)
        FATAL: Could not load /lib/modules/2.6.35-22-generic/modules.dep: No such file or directory
        FATAL: Could not load /lib/modules/2.6.35-22-generic/modules.dep: No such file or directory
        ALERT! /dev/sda1 does not exist. Dropping to a shell!

        BusyBox v1.15.3 (Ubuntu 1:1.15.3-1ubuntu5) built-in shell (ash)
        Enter 'help' for a list of built-in commands.

        (initramfs) [cursor is here]

    At the moment, I can't input anything in the console. The keyboard doesn't work at all. What's wrong? How can I check the boot args or "root=" as suggested? How can I fix this issue? Thanks.

    PS1: /dev/sda1 is type ext4 (rw,nosuid,nodev).
    PS2: /dev/sda1 can be mounted and accessed successfully under SUSE 11 SP1 x64.
    PS3: From this link, I think the keyboard doesn't work because the USB driver is not loaded at that time.

    Read the article

  • best way to "introduce" OOP/OOD to team of experienced C++ engineers

    - by DXM
    I am looking for an efficient way, one that doesn't come off as an insult, to introduce OOP concepts to existing team members. My teammates are not new to OO languages. We've been doing C++/C# for a long time so the technology itself is familiar. However, I look around and, without a major infusion of effort (mostly in the form of code reviews), it seems what we are producing is C code that happens to be inside classes. There's almost no use of the single responsibility principle, abstractions or attempts to minimize coupling, just to name a few. I've seen classes that don't have a constructor but get memset to 0 every time they are instantiated.

    But every time I bring up OOP, everyone always nods and makes it seem like they know exactly what I'm talking about. Knowing the concepts is good, but we (some more than others) seem to have a very hard time applying them when it comes to delivering actual work. Code reviews have been very helpful, but the problem with code reviews is that they only occur after the fact, so to some it seems we end up rewriting (it's mostly refactoring, but still takes lots of time) code that was just written. Also, code reviews only give feedback to an individual engineer, not the entire team.

    I am toying with the idea of doing a presentation (or a series) and trying to bring up OOP again along with some examples of existing code that could've been written better and could be refactored. I could use some really old projects that no one owns anymore, so at least that part shouldn't be a sensitive issue. However, will this work? As I said, most people have done C++ for a long time, so my guess is that a) they'll sit there thinking why I'm telling them stuff they already know, or b) they might actually take it as an insult because I'm telling them they don't know how to do the job they've been doing for years if not decades. Is there another approach which would reach a broader audience than a code review would, but at the same time wouldn't feel like a punishment lecture?

    I'm not a fresh kid out of college who has utopian ideals of perfectly designed code and I don't expect that from anyone. The reason I'm writing this is because I just did a review of a person who actually had a decent high-level design on paper. However, if you picture classes A - B - C - D, in the code B, C and D all implement almost the same public interface and B/C have one-liner functions, so that the top-most class A is doing absolutely all the work (down to memory management, string parsing, setup negotiations...) primarily in 4 mongo methods and, for all intents and purposes, calls almost directly into D.

    Update: I'm a tech lead (6 months in this role) and do have the full support of the group manager. We are working on a very mature product and maintenance costs are definitely letting themselves be known.

    Read the article

  • BIP 10.1.3.4.x June 2010 Update Available

    - by Tim Dexter
    A new patchset for 10.1.3.4.0 and 10.1.3.4.1 is available on Metalink. Some notes:

    The patch number is 9791839. This patchset includes 28 new bug fixes since the last patchset release on March 31. This is a cumulative update that includes all the fixes and enhancements from previous updates. The patch will supersede the other two updates. Install instructions are in the readme inside the patch. There is also a new BIP client patch available, 9821068. No new template building features to my knowledge, but there is an update to the template viewer to allow you to test and debug your shiny new Excel templates.

    Server

        8529759  XMLP_TEMPLATE_DESIGNER CANNOT SAVE / UPLOAD TEMPLATE
        8566455  BI PUBLISHER SCHEDULER DOES NOT START WITH JNDI DATA SOURCE
        9295667  RESPONSE OF GETSCHEDULEDREPORTINFO RETURNS STATUS AS 'UNKNOWN' INSTEAD OF 'SCHED
        9542413  UNABLE TO CREATE A NEW TEMPLATE FROM UI
        9546137  EXCEL ANALYZER TEMPLATE FAILS FOR A STRUCTURED XML WHEN IT IS UPLOADED
        9556338  SIEBEL - BIP PARAMETERS SORT ORDER
        9560562  BI PUBLISHER CACHE DIRECTORY FILLING UP AND POINTING TO INVALID DIRECTORY
        9646599  USER ROLE DEFINED AS PRIMARYGROUP IN ACTIVEDIRECTORY GROUP ARE NOT RECOGNIZED
        9664768  ER: NEED TO BIND USER ATTRIBUTE VALUES DEFINED IN ACTIVEDIRECTORY IN DATA QUERY
        9665075  BI PUBLISHER AFTER 9546699 NOTIFICATIONS FOR REPORTS FAIL
        9669973  ER: NEED TO SUPPORT PRE-PROCESSING XML WITH XSL FOR EXCEL TEMPLATE
        9704401  ER: NEED TO SUPPORT DEFAULT GROUP FOR ALL USERS IN LDAP/AD SECURITY
        9711899  SEARCH PARAMETER IS NOT VISIBLE WHEN SCHEDULE A REPORT
        9753736  SOME ROLES FROM ACTIVEDIRECTORY ARE NOT LISTED IN ADMIN ROLE-FOLDER MAPPING
        9771354  MULTIPLE PARAMETERS IN 10.1.3.4.1 DATA TEMPLATE ACT DIFFERENTLY FROM 10.1.3.
        9772982  "REFRESH OTHER PARAMETERS ON CHANGE" DOESN'T WORK PROPERLY

    Core

        8599646  ER: EXTRA SPACE ADDED BELOW IMAGE IN A TABLE CELL OF TEMPLATE IN FIREFOX
        9377593  SOME ROWS HEIGHT IN HTML/EXCEL OUTPUT ARE TOO BIG IN BI PUBLISHER
        9487030  NAVIGATION TREE REPEATING TWICE IN PDF DOCUMENT CREATED BY BI PUBLISHER
        9509432  PERFORMANCE ISSUE WHEN USING PDF TEMPLATE
        9534424  PS: DOCUMENT-REPEAT-FULLPATH-ELEMENTNAME SHOULDNT USE DOT "." AS PATH SEPARATOR
        9553360  FORMPROCESSOR CANNOT PARSE SOME PDF TEMPLATES
        9554959  TEXT IN AUTOSHAPE IS NOT PROPERLY CUT OFF FOR LINE WRAPPING
        9569417  AFTER APPLYING PATCH 9509432 PDF TEMPLATES WITH DBDRV PRODUCE NO OUTPUT
        9571670  ER: EXCEL TEMPLATE TO SUPPORT XSLT LOGIC AND XSL CUSTOM EXTENSIONS
        9589809  XSL:CALL-TEMPLATE IS MISSING IN GENERATED XSL FILE
        9605920  BOOKMARK TESTCASE FAILED DUE TO ER9283933
        9689634  PRINT FLOW CHART USING ACROSS 3 DOWN 0 GIVES EXTRA BLANK PAGES

    You might have noticed some fixes and enhancements to the Excel templates, so I can get back on those now. There is a part two to the Mapviewer BIP Mashup coming ... just need another 4 hours in the day to squeeze it in.

    Read the article

  • Base Pages and Interfaces for ASP.NET Pages

    - by geekrutherford
    For quite a while I have been using the concept of base pages when developing pages in ASP.NET applications. It is a wonderful method for exposing common functions to all of your application's pages and also overriding certain events for various purposes (i.e. dynamic themes).

    Recently I found out a new developer will be joining my team. This prompted me to review the application's code for readability and ease of maintenance. I began adding comments throughout the code-behind for all pages within the application. While doing so I noted that I had used common method names for such things as loading data, configuring controls, applying filters, etc.

    Bringing a new developer on board, I wanted to make the transition as seamless as possible while also ensuring they follow the existing coding practices we already have in place. While I could have created virtual methods for the common page methods, allowing them to be overridden, what I really needed was a way to ensure the new developer implemented the same methods for each and every page. Thus I created an interface to force the issue.

    Now, every page not only inherits the base page class but also implements an interface. This provides every page not only common functions and overridden page events but also imposes rules for implementing certain common methods :-)

    Interface:

        public interface BasePageInterface
        {
            /// Configures page based on users security permissions.
            void CheckPermissions();

            /// Configures Filter Form control for current page.
            /// Ensure you have set the FilteredGrid and PageAjaxManager properties of the FilterForm control in PageLoad!!!
            void ConfigureFilters();

            /// Sets event handlers and default settings for controls on the current page.
            void ConfigureControls();

            /// Exports data bound to grid in selected format.
            void ExportGridData(ExportFormat fmt);

            /// Loads data and binds to grid.
            /// Columns are turned on/off in grid depending on tab selected and users permissions.
            void LoadData();
        }

    Page code-behind class definition:

        public partial class MyPage : BasePage, BasePageInterface

    Note, you could not use an abstract class to accomplish this, considering C# does not allow for multiple inheritance. Nor could the base page class be abstract, since it needs to inherit from the System.Web.UI.Page class in order to override page events.
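    As a rough sketch of how a page then looks (the page name and the comments inside each method are my own illustrations; BasePage, BasePageInterface and ExportFormat are the types described above), the code-behind simply inherits both and fills in the contract:

        using System;

        public partial class CustomerListPage : BasePage, BasePageInterface
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Common sequence enforced on every page that implements the interface.
                CheckPermissions();
                ConfigureControls();
                ConfigureFilters();   // remember: set FilteredGrid/PageAjaxManager here

                if (!IsPostBack)
                {
                    LoadData();
                }
            }

            public void CheckPermissions()
            {
                // e.g. hide admin-only buttons or redirect unauthorized users
            }

            public void ConfigureFilters()
            {
                // e.g. wire the FilterForm control to this page's grid
            }

            public void ConfigureControls()
            {
                // e.g. attach event handlers and set default control state
            }

            public void ExportGridData(ExportFormat fmt)
            {
                // e.g. write the grid contents out in the selected format
            }

            public void LoadData()
            {
                // e.g. query the data source and bind it to the grid
            }
        }

    The compiler now refuses any page that forgets one of the common methods, which is exactly the guarantee the interface was introduced for.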

    Read the article

  • SmartView 11.1.2.2.103 - Support for MS Office 64 added

    - by THE
    (thanks to Nancy, who shared this with me)

    New for Smart View v11.1.2.2.103, Patch 14362638: Microsoft Office 64-bit is now supported.

    Information for 64-bit Microsoft Office installations: in this release, Smart View supports the 64-bit version of Microsoft Office. If you use 64-bit Office, please note the following:

    - Oracle provides separate Smart View installation files for 64-bit and 32-bit Office systems: smartview-x64.exe is the file for 64-bit Office installations; smartview.exe is the file for 32-bit Office installations.
    - The 64-bit version of Smart View pertains only to the 64-bit version of Microsoft Office and not to the version of the operating system. Customers with 64-bit operating systems and the 32-bit version of Microsoft Office should install the 32-bit version of Smart View.
    - You cannot install the 64-bit version of Smart View from EPM Workspace (13530466).
    - Although Planning Offline is supported for 64-bit operating systems, it is not supported for 64-bit Smart View installations. If you use Planning Offline with Smart View, you must use the 32-bit version of Smart View and the 32-bit version of Microsoft Office.
    - In 64-bit versions of Excel 2010 SP1, the presence of Smart View functions may cause Excel to terminate abruptly and may prevent Copy Data Point and Paste Data Point functions from working. This is a Microsoft issue, and a service request has been filed with Microsoft. Workaround: until the Microsoft fix, use the 32-bit version of Smart View. (13606492)
    - The Smart View function migration utility is not supported on 64-bit Office. (14342207)

    Read the article

  • What do you do when practical problems get in the way of practical goals?

    - by P.Brian.Mackey
    UPDATE: Source control is good to use. Sometimes, real-world issues make it impractical to use. For example:

    - If the team is not used to using source control, training problems can arise.
    - If a team member directly modifies code on the server, various issues can arise: merge problems, lack of history, etc.
    - Let's say there's a project that is way out of sync. The physical files on the server differ in unknown ways over ~100 files. Merging would take not only a great knowledge of the project, but is also well beyond the ability to complete in the given time. Other projects are falling out of sync.
    - Developers continue to have a distrust of source control and therefore compound the issue by not using source control.
    - Developers argue that using source control is wasteful because merging is error prone and difficult. This is a difficult point to argue, because when source control is being so badly misused and source control continually bypassed, it is error prone indeed. Therefore, the evidence "speaks for itself" in their view.
    - Developers argue that directly modifying source control saves time. This is also difficult to argue, because the merge required to synchronize the code to start with is time consuming, across ~10 projects.
    - Permanent files are often stored in the same directory as the web project. So publishing (full publish) erases these files that are not in source control. This also drives distrust for source control, because "publishing breaks the project". Fixing this (moving stored files out of the solution subfolders) takes a great deal of time and debugging, as these locations are not set in web.config and often exist across multiple code points.

    So the culture perpetuates itself. Bad practice begets more bad practice. Bad solutions drive new hacks to "fix" much deeper, much more time-consuming problems. Servers and hard drive space are extremely difficult to come by. Yet user expectations are rising. What can be done in this situation?

    Read the article

  • Marshalling C# Structs into DX11 cbuffers

    - by Craig
    I'm having some issues with the packing of my structs in C# and passing them through to cbuffers I have registered in HLSL. When I pack my struct in one manner the information seems to be passed to the shader correctly:

        [StructLayout(LayoutKind.Explicit, Size = 16)]
        internal struct TestStruct
        {
            [FieldOffset(0)]
            public Vector3 mEyePosition;

            [FieldOffset(12)]
            public int type;
        }

    This works perfectly when used against this HLSL fragment:

        cbuffer PerFrame : register(b0)
        {
            Vector3 eyePos;
            int type;
        }

        float3 GetColour()
        {
            float3 returnColour = float(0.0f, 0.0f, 0.0f);
            switch(type)
            {
                case 0: returnColour = float3(1.0f, 0.0f, 0.0f); break;
                case 1: returnColour = float3(0.0f, 1.0f, 0.0f); break;
                case 2: returnColour = float3(0.0f, 0.0f, 1.0f); break;
            }
            return returnColour;
        }

    However, when I use the following structure definitions...

        // Note this is 16 because HLSL packs in 4-float 'chunks'.
        // It is also simplified, but still demonstrates the problem.
        [StructLayout(LayoutKind.Explicit, Size = 16)]
        internal struct InternalTestStruct
        {
            [FieldOffset(0)]
            public int type;
        }

        [StructLayout(LayoutKind.Explicit, Size = 32)]
        internal struct TestStruct
        {
            [FieldOffset(0)]
            public Vector3 mEyePosition;

            // Missing 4 bytes here for correct packing.
            [FieldOffset(16)]
            public InternalTestStruct mInternal;
        }

    ... the following HLSL fragment no longer works.

        struct InternalType
        {
            int type;
        }

        cbuffer PerFrame : register(b0)
        {
            Vector3 eyePos;
            InternalType internalStruct;
        }

        float3 GetColour()
        {
            float3 returnColour = float(0.0f, 0.0f, 0.0f);
            switch(internalStruct.type)
            {
                case 0: returnColour = float3(1.0f, 0.0f, 0.0f); break;
                case 1: returnColour = float3(0.0f, 1.0f, 0.0f); break;
                case 2: returnColour = float3(0.0f, 0.0f, 1.0f); break;
            }
            return returnColour;
        }

    Is there a problem with the way I am packing the struct, or is it another issue? To reiterate: I can pass a struct in a cbuffer so long as it does not contain a nested struct.
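    When the two sides stop matching like this, one quick sanity check is to print the offsets the CLR actually assigns and compare them against the cbuffer's 16-byte register layout. The sketch below is only a diagnostic; the Vector3 here is a minimal stand-in of my own, not the math library's type:

        using System;
        using System.Runtime.InteropServices;

        // Minimal stand-ins for the question's types, just to inspect layout.
        [StructLayout(LayoutKind.Sequential)]
        struct Vector3 { public float X, Y, Z; }

        [StructLayout(LayoutKind.Explicit, Size = 16)]
        struct InternalTestStruct
        {
            [FieldOffset(0)] public int type;
        }

        [StructLayout(LayoutKind.Explicit, Size = 32)]
        struct TestStruct
        {
            [FieldOffset(0)]  public Vector3 mEyePosition;
            [FieldOffset(16)] public InternalTestStruct mInternal;
        }

        static class PackingCheck
        {
            static void Main()
            {
                // The managed view: compare these numbers against the offsets the
                // cbuffer actually expects (HLSL packs members into 16-byte registers,
                // and a nested struct may not start where the FieldOffset assumes).
                Console.WriteLine("sizeof(TestStruct)  = " + Marshal.SizeOf(typeof(TestStruct)));
                Console.WriteLine("mEyePosition offset = " + Marshal.OffsetOf(typeof(TestStruct), "mEyePosition"));
                Console.WriteLine("mInternal offset    = " + Marshal.OffsetOf(typeof(TestStruct), "mInternal"));
            }
        }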

    Read the article

  • Why would you dual-run an app on Azure and AWS?

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/11/10/why-would-you-dual-run-an-app-on-azure-and-aws.aspx

    I had this question from a viewer of my Pluralsight course, Implementing the Reactive Manifesto with Azure and AWS, and thought I'd publish the response. So why would you dual-run your cloud app by hosting it on Azure and AWS? Sounds like a lot of extra development and management overhead. Well, the most compelling reasons are reliability and portability.

    In 2012 I was working for a client who was making a big investment in the cloud, and at the end of the year we published their first external API for business partners. It was hosted in Azure and used some really nice features to route back into existing on-premise services. We were able to publish a clean, simple API to partners, and hide away the underlying complexity of the internal services while still leveraging them to do all the work. Two days after we went live, we were hit by the Azure SSL certificate expiry outage, and our API was unavailable for the best part of 3 days. Fortunately we had planned a gradual roll-out to partners, so the impact was minimal, but we'd been intending to ramp up quickly, and if the outage had happened a week or two later we would have been in a very bad place. Not least because our app could only run on Azure; we couldn't package it up for another service without going back and reworking the code.

    More recently AWS had an issue with a networking device in one of their data centres which caused an outage that took the best part of a day to resolve. In both scenarios the SLAs are worthless, as you'll get back a small percentage of your cloud expenditure, which is going to be negligible compared to your costs in dealing with the outage. And if your app is built specifically for AWS or Azure, then if there's an extended outage you can't just deploy it onto a new set of kit from a different supplier. And the chances are pretty good there will be another extended outage, both for Microsoft and for Amazon. But the chances are small that it will happen to both at the same time.

    So my basic guidance has been: ignore the SLAs, go for better uptime by using two clouds. As soon as you need to scale beyond a single instance, start by scaling out to another cloud. Then scale out to different data centres in both clouds. Then you've got dual-cloud, quadruple-datacentre redundancy, so any more scaling you need can be left to the clouds to auto-scale themselves. By running in both clouds, you've made your app portable, so in the highly unlikely event that both AWS and Azure go down in multiple regions, you'll have a deployment package which will let you spin up a new stack on yet another cloud, without having to rework your solution.

    Read the article

  • Revenue Recognition: Performance Obligation Pass a Hurdle

    - by Theresa Hickman
    I met up with Seamus Moran, our resident accounting expert, to get his thoughts about the latest happenings with IFRS.

    Last week, on March 13, the comment period on the FASB and IASB exposure draft "Revenue From Contracts with Customers" closed. FASB and IASB have just over 20 comment letters, a very small number. The implication is that the exposure draft does reflect general acceptance, and therefore will be published as both a US and an internationally Generally Accepted Accounting Standard. At a recent conference call, FASB and IASB expected to complete their report to both Boards on the comments by early summer, complete their deliberation of the comments by the fall, and draft the final standard text by late this year. It is assumed the concept of Performance Obligations would become US GAAP and IFRS in place of the existing standards. They confirmed that all existing US GAAP and IFRS guidelines would be withdrawn, and that they were in dialogue with the SEC on withdrawing the SEC guidelines on the revenue issue as well.

    The open question is: when will Performance Obligations become effective? The Boards have said that they would like this Revenue Recognition standard and the Lease Accounting standard to be effective at the same time, because what isn't either insurance, interest, or a lease is a revenue arrangement. However, ascertaining what is generally acceptable in respect of leases is proving a little elusive, and the Boards have recently diverged a little on the P&L side of the accounting (although both are in agreement that there will be no off-balance-sheet leases). It is therefore likely that the Lease standard might be delayed. One wonders if the Boards will define effectivity of the Revenue standard independently of the Lease standard or if they will stick with their resolve to make them co-effective. The Boards have also said that neither standard will be effective before June 2015.

    Here is the gist of the new Revenue Recognition principle: recognize revenue to depict the transfer of goods or services in an amount that reflects the consideration expected to be entitled in exchange for those goods and services. Steps to apply the core principle:

    1. Identify the contract with the customer
    2. Identify the separate performance obligations
    3. Determine the transaction price
    4. Allocate the transaction price
    5. Recognize revenue when a performance obligation is satisfied

    Read the article

  • Why is Double.Parse so slow?

    - by alexhildyard
    I was recently investigating a bottleneck in one of my applications, which read a CSV file from disk using a TextReader a line at a time, split the tokens, called Double.Parse on each one, then shunted the results into an object list. I was surprised to find it was actually the Double.Parse which seemed to be taking up most of the time.

    Googling turned up this, which is a little unfocused in places but throws out some excellent ideas:

    - It makes more sense to work with binary format directly, rather than coerce strings into doubles
    - There is a significant performance improvement in composing doubles directly from the byte stream via long intermediaries
    - String.Split is inefficient on fixed-length records

    In fact it turned out that my problem was more insidious and also more mundane: a simple case of bad data in, bad data out. Since I had been serialising my Doubles as strings, when I inadvertently divided by zero and produced a "NaN", this of course was serialised as well without error. And because I was reading in using Double.Parse, these "NaN" fields were also (correctly) populating real Double objects without error. The issue is that Double.Parse("NaN") is incredibly slow. In fact, it is of the order of 2000x slower than parsing a valid double. For example, the code below gave me results of 357ms to parse 1000 NaNs, versus 15ms to parse 100,000 valid doubles.

        const int invalid_iterations = 1000;
        const int valid_iterations = invalid_iterations * 100;
        const string invalid_string = "NaN";
        const string valid_string = "3.14159265";

        DateTime start = DateTime.Now;

        for (int i = 0; i < invalid_iterations; i++)
        {
            double invalid_double = Double.Parse(invalid_string);
        }

        Console.WriteLine(String.Format("{0} iterations of invalid double, time taken (ms): {1}",
            invalid_iterations,
            ((TimeSpan)DateTime.Now.Subtract(start)).Milliseconds
        ));

        start = DateTime.Now;

        for (int i = 0; i < valid_iterations; i++)
        {
            double valid_double = Double.Parse(valid_string);
        }

        Console.WriteLine(String.Format("{0} iterations of valid double, time taken (ms): {1}",
            valid_iterations,
            ((TimeSpan)DateTime.Now.Subtract(start)).Milliseconds
        ));

    I think the moral is to look at the context, specifically the data, as well as the code itself. Once I had corrected my data, the performance of Double.Parse was perfectly acceptable, and while clearly it could have been improved, it was now sufficient to my needs.
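    Since the real fix was upstream of the parser, a small guard at both ends keeps this from recurring. This is a sketch of my own rather than anything prescribed above (the helper name and the round-trip format are arbitrary choices):

        using System;
        using System.Globalization;

        static class SafeDoubleIo
        {
            // Write side: refuse to serialise values that will come back as "NaN"/"Infinity".
            public static string ToField(double value)
            {
                if (double.IsNaN(value) || double.IsInfinity(value))
                    throw new ArgumentOutOfRangeException("value", "Refusing to serialise a non-finite double.");
                return value.ToString("R", CultureInfo.InvariantCulture);
            }

            // Read side: cheap literal check first, so a stray "NaN" never reaches the
            // slow Double.Parse("NaN") path; TryParse handles any other malformed token.
            public static bool TryReadField(string token, out double value)
            {
                if (string.Equals(token, "NaN", StringComparison.OrdinalIgnoreCase))
                {
                    value = double.NaN;
                    return false; // treat as bad data and let the caller log or skip it
                }
                return double.TryParse(token, NumberStyles.Float, CultureInfo.InvariantCulture, out value);
            }
        }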

    Read the article

  • In a multidisciplinary team, how much should each member's skills overlap?

    - by spade78
    I've been working in embedded software development for this small startup and our team is pretty small: about 3-4 people. We're responsible for all engineering, which involves an RF device controlled by an embedded microcontroller that connects to a PC host which runs some sort of data collection and analysis software.

    I have come to develop these two guidelines when I work with my colleagues:

    1. Define a clear separation of responsibilities and make sure each person's contribution to the final product doesn't overlap.
    2. Don't assume your colleagues know everything about their responsibilities. I assume there is some sort of technology that I will need to be competent at to properly interface with the work of my colleagues.

    The first point is pretty easy for us. I do firmware, one guy does the RF, another does the PC software, and the last does the DSP work. Nothing overlaps in terms of two people's work being mixed into the final product. For that to happen, one guy has to hand off work to another guy who will vet it and integrate it himself.

    The second point is the heart of my question. I've learned the hard way not to trust the knowledge of my colleagues absolutely, no matter how many years of experience they claim to have. At least not until they've demonstrated it to me a couple of times. So given that, whenever I develop a piece of firmware, if it interfaces with some technology that I don't know, then I'll try to learn it and develop a piece of test code that helps me understand what they're doing. That way if my piece of the product comes into conflict with another piece, then I have some knowledge about possible causes. For example, the PC guy has started implementing his GUIs in .NET WPF (C#) and using LibUsbDotNet for USB access. So I've been learning C# and the .NET USB library that he uses, and I built a little console app to help me understand how that USB library works.

    Now all this takes extra time and energy, but I feel it's justified as it gives me a foothold to confront integration problems. Also I like learning this new stuff, so I don't mind. On the other hand, I can see how this can turn into a time sink for work that won't make it into the final product and may never turn into a problem. So how much experience/skills overlap do you expect in your teammates relative to your own skills? Does this issue go away as the teams get bigger and more diverse?

    Read the article

  • What to do when an open source project starts to tear apart? (or a manager tries to write code and then shouts at the team)

    - by Kabumbus
    Imagine there is an open source cross-platform project on Google Code. It has lots of revisions (1000). It concentrates in itself lots of technological stuff - rare stuff - it mixes top tech. It contains a server, and more than one client.

    The project was created by a well-connected team of developers (friends) and a manager who was sponsoring the project at its start-up during its first few months (the project is now more than a year old; sponsoring an OSS project is a big good deal, and he also gave the developers the idea for the project). The project was growing in complexity and in the effort required to continue development.

    Once upon a time the manager - team leader - started trying to write code (he was a programmer on some other projects - not the best, but he felt like he was one). He started because one of the developers suggested an idea at the team meeting and he felt he just needed to do it on his own. He failed, and he told the dev team about it. The dev team did what he failed to do in a few days. After that, the manager feels that the team codes without him perfectly and gets the job done in a short time. He felt sorry and lost and he started to crash like an old bad PC.

    Firstly, he started to scream (in the form of messages, not voice): he tried to tell the developers that what they were doing was a bad, not-needed thing - the developers kindly told him that his "beginnings" were not compilable while the dev team's product worked as needed. He told the developers that all work they do should first be discussed with him. Here is the part where we need to mention that all team members are "project owners" and logically have equal rights. The team leader suggested to the developers these options: change their dev process to go through him, or be moved from project owners to contributors.

    So what are our options as developers? What arguments can we provide to the team leader/manager for him to calm down? Is it possible to save the project or is it better to fork out now? An important issue is that lately we had no active ticket system, and I personally think that this was the reason the mess appeared. So... any ideas?

    Read the article

  • Ubuntu 10.10 not recognizing external hard drive

    - by sr71
    Installed Ubuntu 10.10 on a hard drive by itself (no Windows). All updates are up to date. Everything works fine except when I plug in a 320 GB Toshiba external hard drive... it is not recognized. When I plug in an 8 GB flash drive it is recognized, no problem.

    What do I mean by "not recognized"? I mean: how do you know it was not recognized? You can check with the command "cat /proc/partitions" in a terminal with and without the HDD attached. If you see some difference, it's OK. If the difference is something like "sda1" (sd + letter + number) then you have a partition on it; maybe it can't be handled in Ubuntu (no filesystem on it or so). If there is only "sda" in the difference, for example (sd + letter, but no number after it), then the drive itself is detected, just no partition is created on it. Also you can check the messages of the kernel, with the "dmesg" command in the terminal. If there is no disk/partition/anything in /proc/partitions, there can be a USB-level problem; you can issue the command "lsusb" as was suggested before my answer. It's really a matter of what you mean by "not recognized".

    @Marco Ceppi @Oli @Pitto By "not recognized" I meant that when I plug in an 8 GB flash drive an icon immediately appears on the desktop indicating that there is a flash drive plugged in, and I can click on it and view the files on the flash drive. It also shows up in the "Nautilus" default file manager. When I plug in the 320 GB Toshiba external hard drive, I get no indication on the desktop or in the file manager (thus "not recognized"). When I run the command "cat /proc/partitions" I get an error message Could not open location 'file:///home/bob/cat/proc/partitions' with or without the external hard drive installed. With the "dmesg" command I get about 10 pages of info with no mention of disk/partition/anything in /proc/partitions. When I run the "lsusb" command I get the following:

        bob@bob-desktop:~$ lsusb
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 003 Device 003: ID 04b3:3003 IBM Corp. Rapid Access III Keyboard
        Bus 003 Device 002: ID 04b3:3004 IBM Corp. Media Access Pro Keyboard
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        bob@bob-desktop:~$

    Read the article

  • Why does creating dynamic bodies in JBox2D freeze my app?

    - by Amplify91
    My game hangs/freezes when I create dynamic bullet objects with Box2D and I don't know why. I am making a game where the main character can shoot bullets by the user tapping on the screen. Each touch event spawns a new FireProjectileEvent that is handled properly by an event queue. So I know my problem is not trying to create a new body while the Box2D world is locked. My bullets are then created and managed by an object pool class like this:

        public Projectile getProjectile(){
            for(int i=0;i<mProjectiles.size();i++){
                if(!mProjectiles.get(i).isActive){
                    return mProjectiles.get(i);
                }
            }
            return mSpriteFactory.createProjectile();
        }

    mSpriteFactory.createProjectile() leads to the physics component of the Projectile class creating its Box2D body. I have narrowed the issue down to this method, which looks like this:

        public void create(World world, float x, float y, Vec2 vertices[], boolean dynamic){
            BodyDef bodyDef = new BodyDef();
            if(dynamic){
                bodyDef.type = BodyType.DYNAMIC;
            }else{
                bodyDef.type = BodyType.STATIC;
            }
            bodyDef.position.set(x, y);
            mBody = world.createBody(bodyDef);

            PolygonShape dynamicBox = new PolygonShape();
            dynamicBox.set(vertices, vertices.length);

            FixtureDef fixtureDef = new FixtureDef();
            fixtureDef.shape = dynamicBox;
            fixtureDef.density = 1.0f;
            fixtureDef.friction = 0.0f;

            mBody.createFixture(fixtureDef);
            mBody.setFixedRotation(true);
        }

    If the dynamic parameter is set to true my game freezes before crashing, but if it is false, it will create a projectile exactly how I want it - it just doesn't function properly (because a projectile is not a static object). Why does my program fail when I try to create a dynamic object at runtime but not when I create a static one? I have other dynamic objects (like my main character) that work fine. Any help would be greatly appreciated.

    This is a screenshot of a method profile I did. Especially notable is number 8. I'm just still unsure what I'm doing wrong.

    Other notes: I am using JBox2D 2.1.2.2 (upgraded from 2.1.2.1 to try to fix this problem). When the application freezes, if I hit the back button, it appears to move my game backwards by one update tick. Very strange.

    Read the article

  • LWJGL - Mixing 2D and 3D

    - by nathan
    I'm trying to mix 2D and 3D using LWJGL. I have written two little methods that allow me to easily switch between 2D and 3D:

        protected static void make2D() {
            glEnable(GL_BLEND);
            GL11.glMatrixMode(GL11.GL_PROJECTION);
            GL11.glLoadIdentity();
            glOrtho(0.0f, SCREEN_WIDTH, SCREEN_HEIGHT, 0.0f, 0.0f, 1.0f);
            GL11.glMatrixMode(GL11.GL_MODELVIEW);
            GL11.glLoadIdentity();
        }

        protected static void make3D() {
            glDisable(GL_BLEND);
            GL11.glMatrixMode(GL11.GL_PROJECTION);
            GL11.glLoadIdentity(); // Reset The Projection Matrix
            GLU.gluPerspective(45.0f, ((float) SCREEN_WIDTH / (float) SCREEN_HEIGHT), 0.1f, 100.0f); // Calculate The Aspect Ratio Of The Window
            GL11.glMatrixMode(GL11.GL_MODELVIEW);
            glLoadIdentity();
        }

    Then in my rendering code I would do something like:

        make2D();
        // draw 2D stuff here
        make3D();
        // draw 3D stuff here

    What I'm trying to do is to draw a 3D shape (in my case a quad) and a 2D image. I found this example and took the code from TextureLoader, Texture and Sprite to load and render a 2D image. Here is how I load the image:

        TextureLoader loader = new TextureLoader();
        Sprite s = new Sprite(loader, "player.png");

    And how I render it:

        make2D();
        s.draw(0, 0);

    It works great. Here is how I render my quad:

        glTranslatef(0.0f, 0.0f, 30.0f);
        glScalef(12.0f, 9.0f, 1.0f);
        DrawUtils.drawQuad();

    Once again, no problem, the quad is properly rendered. DrawUtils is a simple class I wrote containing utility methods to draw primitive shapes.

    Now my problem is when I want to mix both of the above: loading/rendering the 2D image and rendering the quad. When I try to load my 2D image with the following:

        s = new Sprite(loader, "player.png");

    my quad is not rendered anymore (I'm not even trying to render the 2D image at this point). Only the fact of creating the texture creates the issue. After looking a bit at the code of Sprite and TextureLoader I found that the problem appears after the call to glTexImage2D. In the TextureLoader class:

        glTexImage2D(target, 0, dstPixelFormat, get2Fold(bufferedImage.getWidth()), get2Fold(bufferedImage.getHeight()), 0, srcPixelFormat, GL_UNSIGNED_BYTE, textureBuffer);

    Commenting this line makes the problem disappear. My question is then: why? Is there anything special to do after calling this function to do 3D? Does this function alter the render part, the projection matrix?

    Read the article

  • Best way to remote restart Ubuntu from Windows machine

    - by robsoft
    Background: I'm looking to put a series of Ubuntu machines into retail locations; they're being used as dumb kiosks to show a series of slides on large LCD panel TV screens. Once installed, they won't have a keyboard or mouse connected but will have a fixed IP on the local network. Everything is configured to auto-start, no automatic updates, no power saving etc - I think we're pretty much good to go apart from one thing.

    I need the retail staff to be able to restart the boxes if a problem arises. We have VNC running (now that we've turned off desktop enhancements!) so that we can remotely get into the machines if we need to, but that's not something we would allow the retail staff to do. The machines are going to be physically 'out of the way' (probably in the ceiling space) so the power button is not easily accessible!

    I'd like to have some means of allowing the retail staff to restart the Ubuntu machine from the desktop of one of their Windows terminals. I don't really want to give them some kind of raw terminal access (the command line will frighten them!) and I don't want them to use VNC (as stated above). Ideally there would be an icon on the Windows desktop, they double-click it, reply to a simple 'are you sure?' prompt, and then the Ubuntu box is told to restart. The Windows side of that won't be a problem; we can write something using Delphi, Python & Qt4, whatever - it's the Ubuntu side of it I'm stuck with. Out of sight/view, could I have a Windows program open a terminal across the network and tell Ubuntu to restart? Is this what SSH could be used for (I have never set that kind of thing up)? The Windows programming side isn't really an issue; it's just that I'm a total Ubuntu noob and don't know where to start from the platform point of view.

    The other thing we considered is also having the machine automatically restart itself at a set time each day (obviously out of store hours!). To me, that seems a bit unnecessary (though forcing a restart once a week/month might be worthwhile). Any thoughts or suggestions? Being able to restart the box on demand across the network is my prime requirement.
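    For the record, one sketch of the SSH route from the Windows side (everything here is an assumption I haven't set up yet: PuTTY's plink.exe shipping alongside the app, a dedicated kiosk account on the Ubuntu boxes, and that account being allowed to run reboot via sudo without a password):

        using System;
        using System.Diagnostics;
        using System.Windows.Forms;

        static class KioskRestart
        {
            // Double-click icon on the Windows desktop: confirm, then ask the kiosk to reboot over SSH.
            static void Main()
            {
                if (MessageBox.Show("Restart the display PC?", "Kiosk",
                        MessageBoxButtons.YesNo) != DialogResult.Yes)
                    return;

                var psi = new ProcessStartInfo
                {
                    // Host, account and password are placeholders for illustration only.
                    FileName = "plink.exe",
                    Arguments = "-batch -ssh kiosk@192.168.0.50 -pw secret sudo reboot",
                    UseShellExecute = false,
                    CreateNoWindow = true
                };
                using (var p = Process.Start(psi))
                {
                    p.WaitForExit();
                }
            }
        }

    Using a key pair (plink -i kiosk.ppk) instead of -pw would avoid hard-coding a password, and restricting the kiosk account's sudoers entry to /sbin/reboot keeps the exposure small.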

    Read the article

  • Why should I not do a master's degree?

    - by aurel
    I left university in July 2010, where I studied web design (as we all know you learn more by yourself, but that's not the issue at the moment). Since then I have not managed to find a job (apart from a one-month work experience placement). From the way things are going, and taking into account the fact that all my university friends are in the same situation, I don't think that I am going to find a job (within the industry) soon.

    Now, even though I don't have a job, I am still working on personal projects and trying to keep up to date (I don't need a job or uni to do this) - but I am thinking: because there is no work available, would it be worth going back to uni for a master's degree? I know I don't need it and I know that it is unlikely that I will learn anything important, as I believe in self-learning, and in most cases it is a lot more effective (but I have to say I don't mind going back to school).

    The only reason I am thinking of doing the master's is (and this is where I need your help): if it takes me a year to get a job, then at the interview, would the employer think "what the hell did this guy do since he left university"? Now if I go to university that would solve this problem. Or am I making up a problem that does not exist? Plus, I know that employers want examples of sites that I have been working on; at the moment I only have 3 (as when working on personal projects, where there is no time limit, I tend to drag things out in order to get them perfect, and they never get perfect) - so by going back to uni, this problem may be solved.

    I say all this as I have read a lot about the fact that you don't need a master's degree to work in the web design market (and I totally agree), but considering my concern, the question is: should I do a master's course to avoid just spending hours in my room working and learning on my own (given that it would be hard to convince employers that I was really learning in my room)?

    Maybe it's because I'm still young (age 22, not that old anyway :), but I don't have the "dream" of being rich, so to tell the truth I don't really care about the fact that I don't have a job (at the moment), because regardless, I am working on what I love every day. But I know that in the future, when I do need a job, I may find it harder to get one if I neglect doing so now. Every time I ask a question that I'm not sure about, I keep going on and on, but I really hope you get what I am trying to get across.

    By the way, the course that I am looking at for a master's says that it would teach me how to do these: e-commerce, e-government, e-science, e-learning. I don't know any of them, apart from e-commerce. Thanks

    Read the article

  • How to handle this unfortunately non hypothetical situation with end-users?

    - by User Smith
    I work in a medium-sized company but with a very small IT force. Last year (2011), I wrote an application that is very popular with a large group of end-users. We hit a deadline at the end of last year and some functionality (I will call it funcA from now on) was not added into the application that was wanted at the very end. So, this application has been running in live/production since the end of 2011 - I might add, without issue.

    Yesterday, a whole group of end-users started complaining that funcA, which was never in the application, is no longer working. Our priority at this company is that if an application is broken it must be fixed first, prior to prioritized projects. I have compared code and queries and there is no difference since 2011, which is proofA. I then was able to get one of the end-users to admit that it never worked (proofB), but since then that end-user has gone back and said that it was working previously... I believe the horde of end-users has assimilated her. I have also reviewed my notes for this project, which contain the requirements and daily updates regarding the project, and which specifically state, "funcA not achieved due to time constraints" (proofC).

    I have spoken with many of them and I can see where they could be confused, as they are very far from a programming background, but I also know they are intelligent enough to act as a group in order to bypass project prioritization orders to get functionality that they want to make their job easier. The worst part is that now groupthink is setting in, and my boss and the head of IT are actually starting to believe them, even though there are no code or query changes. As far as reviewing the state of the logic, it is very cut and dried, to the point of: if 1 = 1, funcA will not work.

    So, this is the end of the description of my scenario, but I am trying not to get severely dinged on my performance metrics due to this, which would essentially have me moved to fixing a production problem that doesn't exist and that will probably take over 1 month. I am looking for direct answers to this question. This question is not for rants, polling, or discussions, as this is not the format for StackExchange. Please don't downvote me too terribly (it is pretty common on this specific site of Stack); I am looking for honest answers to this situation and I couldn't find a more appropriate forum.

    Read the article

  • Bootloader Problems Grub Won't Load Windows 7

    - by user108805
    I sent this to [email protected] and got no response, so I thought I could get a faster solution here. I am running Windows 7 64-bit and Ubuntu 12.04 LTS on separate partitions. The Boot-Repair URL is: http://paste.ubuntu.com/1365163/

    Originally I was unable to access Ubuntu after a Windows update (Ubuntu was installed using Wubi). Rather than logging into Ubuntu from the Windows 7 bootloader, it led to the GRUB command prompt. No matter what I did here, it would not log me into Linux. As a result I uninstalled Ubuntu from the Add/Remove Programs application in Windows 7. I then re-installed Ubuntu 12.04 LTS using a live CD USB. This time, however, I created a partition. I then restarted and got the GRUB bootloader, which loads Ubuntu 12.04 LTS with no problems; however, when I select Windows (listed as "Windows 7 (loader)"), it just refreshes the GRUB bootloader instead of loading Windows 7.

    I then used the Windows 7 repair disk to run bootrec /fixmbr and bootrec /fixboot. This led to no bootloader coming up when I started my computer. Instead I got a blank black screen with a flashing white cursor. I went on to do a bootrec /rebuildbcd and bootrec /scanos. These did nothing to change the situation. When I ran bootrec /scanos it said that no Windows 7 installations were present. After this I decided to reinstall Windows 7, only for this to do nothing to change the situation. Afterwards I did a boot-repair, after which I began to get the GRUB bootloader, which would load Ubuntu 12.04 LTS but still would not load Windows 7. I also did a sudo update-grub, which recognized Windows 7 as being installed but still didn't fix the issue of loading Windows 7.

    While running Ubuntu I have no problem accessing my Windows 7 partition, which is formatted as NTFS. It shows all the files and folders reflecting that the re-install did take place, and it also shows all of my old applications and folders in the Windows.old folder. I am completely stuck at this point and have no clue what I should do next. Any help you can offer me will be greatly appreciated. Thank you --gap

    Read the article

  • Headaches using distributed version control for traditional teams?

    - by J Cooper
    Though I use and like DVCS for my personal projects, and can totally see how it makes managing contributions to your project from others easier (e.g. your typical GitHub scenario), it seems like for a "traditional" team there could be some problems over the centralized approach employed by solutions like TFS, Perforce, etc. (By "traditional" I mean a team of developers in an office working on one project that no one person "owns", with potentially everyone touching the same code.) A couple of these problems I've foreseen on my own, but please chime in with other considerations.

    In a traditional system, when you try to check your change in to the server, if someone else has previously checked in a conflicting change then you are forced to merge before you can check yours in. In the DVCS model, each developer checks in their changes locally and at some point pushes to some other repo. That repo then has a branch of that file that 2 people changed. It seems that now someone must be put in charge of dealing with that situation. A designated person on the team might not have sufficient knowledge of the entire codebase to be able to handle merging all conflicts. So now an extra step has been added where someone has to approach one of those developers, tell him to pull and do the merge and then push again (or you have to build an infrastructure that automates that task). Furthermore, since DVCS tends to make working locally so convenient, it is probable that developers could accumulate a few changes in their local repos before pushing, making such conflicts more common and more complicated.

    Obviously if everyone on the team only works on different areas of the code, this isn't an issue. But I'm curious about the case where everyone is working on the same code. It seems like the centralized model forces conflicts to be dealt with quickly and frequently, minimizing the need to do large, painful merges or have anyone "police" the main repo. So for those of you who do use a DVCS with your team in your office, how do you handle such cases? Do you find your daily (or more likely, weekly) workflow affected negatively? Are there any other considerations I should be aware of before recommending a DVCS at my workplace?

    Read the article

  • Circle-Rectangle collision in a tile map game

    - by furiousd
    I am making a 2D tile-map-based putt-putt game. I have collision detection working between the ball and the walls of the map, although when the ball collides at the meeting point between 2 tiles I offset it by 0.5 so that it doesn't get stuck in the wall. This ain't a huge issue though.

        if(y % 20 == 0) {
            y+=0.5;
        }
        if(x % 20 == 0) {
            x+=0.5;
        }

    Collisions work as follows:

    1. Find the closest point between each tile and the center of the ball
    2. If distance(ball_x, ball_y, close_x, close_y) <= ball_radius and the closest point belongs to a solid object, collision has occurred
    3. Invert X/Y speed according to the side of the object collided with

    The next thing I tried to do was implement floating blocks in the middle of the map for the ball to bounce off of. When a ball collides with a corner of the block, it gets stuck in it. So I changed my determineRebound() function to treat corners as if they were circles. Here's that function (i and j are indexes of the solid object in the 2D map array; x & y are the centre point of the ball):

        void determineRebound(int _i, int _j) {
            if(y > _i*tile_w && y < _i*tile_w + tile_w) {
                // Not a corner
                xs*=-1;
            } else if(x > _j*tile_w && x < _j*tile_w + tile_w) {
                // Not a corner
                ys*=-1;
            } else {
                // Corner
                float nx = x - close_x;
                float ny = y - close_y;
                float len = sqrt(nx * nx + ny * ny);
                nx /= len;
                ny /= len;
                float projection = xs * nx + ys * ny;
                xs -= 2 * projection * nx;
                ys -= 2 * projection * ny;
            }
        }

    This is where things have gotten messy. Collisions with 'floating' corners work fine, but now when the ball collides near the meeting point of 2 tiles, it detects a corner collision and does not rebound as expected. I'm a bit in over my head at this point. I guess I'm wondering if I'm going about making this sort of game in the right way. Is a 2D tile map the way to go? If so, is there a problem with my collision logic, and where am I going wrong? Any advice/feedback would be great.
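    For comparison, one common way to avoid special-casing corners at all is to always reflect the velocity about the normal running from the closest point on the tile to the ball's centre; edges and corners then fall out of the same formula. This is only a sketch (the Vec2 type and method names are made up, not from the game's code):

        using System;

        struct Vec2
        {
            public float X, Y;
            public Vec2(float x, float y) { X = x; Y = y; }
            public static Vec2 operator -(Vec2 a, Vec2 b) { return new Vec2(a.X - b.X, a.Y - b.Y); }
            public float Dot(Vec2 o) { return X * o.X + Y * o.Y; }
            public float Length() { return (float)Math.Sqrt(X * X + Y * Y); }
        }

        static class CircleAabb
        {
            // Reflects the ball's velocity off an axis-aligned tile. The collision
            // normal points from the closest point on the tile towards the circle
            // centre, so a corner hit simply produces a diagonal normal.
            public static bool Bounce(ref Vec2 pos, ref Vec2 vel, float radius,
                                      float tileX, float tileY, float tileW, float tileH)
            {
                float closestX = Math.Max(tileX, Math.Min(pos.X, tileX + tileW));
                float closestY = Math.Max(tileY, Math.Min(pos.Y, tileY + tileH));

                Vec2 n = pos - new Vec2(closestX, closestY);
                float dist = n.Length();
                if (dist == 0f || dist > radius) return false;   // no hit (or centre inside the tile)

                n = new Vec2(n.X / dist, n.Y / dist);            // unit normal
                float vDotN = vel.Dot(n);
                if (vDotN < 0f)                                   // only reflect if moving into the tile
                    vel = new Vec2(vel.X - 2f * vDotN * n.X, vel.Y - 2f * vDotN * n.Y);

                // Push the ball out so it is not left overlapping the tile.
                pos = new Vec2(closestX + n.X * radius, closestY + n.Y * radius);
                return true;
            }
        }

    This mirrors the 'corner' branch of determineRebound(), just applied to every contact; the degenerate case where the centre lands exactly on a tile edge (dist == 0) still needs the small nudge used above, or resolution against the previous position.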

    Read the article

  • determine if udp socket can be accessed via external client

    - by JohnMerlino
    I don't have access to the company firewall server, but supposedly port 1720 is open on my one Ubuntu server. So I want to test it with netcat:

        sudo nc -ul 1720

    The port is listening on the machine ITSELF:

        sudo netstat -tulpn | grep nc
        udp   0   0 0.0.0.0:1720   0.0.0.0:*   29477/nc

    The port is open and in use on the machine ITSELF:

        lsof -i -n -P | grep 1720
        gateway   980  myuser   8u  IPv4 187284576   0t0  UDP *:1720

    Checked the firewall on the current server:

        sudo ufw allow 1720/udp
        Skipping adding existing rule
        Skipping adding existing rule (v6)

        sudo ufw status verbose | grep 1720
        1720/udp                   ALLOW IN    Anywhere
        1720/udp                   ALLOW IN    Anywhere (v6)

    But I try echoing data to it from another computer (I replaced the x's with the real integers):

        echo "Some data to send" | nc xx.xxx.xx.xxx 1720

    But it didn't write anything. So then I try with telnet from the other computer as well:

        telnet xx.xxx.xx.xxx 1720
        Trying xx.xxx.xx.xxx...
        telnet: connect to address xx.xxx.xx.xxx: Operation timed out
        telnet: Unable to connect to remote host

    Although I don't think telnet works with UDP sockets. I ran nmap from another computer within the same local network and this is what I got:

        sudo nmap -v -A -sU -p 1720 xx.xxx.xx.xx

        Starting Nmap 5.21 ( http://nmap.org ) at 2013-10-31 15:41 EDT
        NSE: Loaded 36 scripts for scanning.
        Initiating Ping Scan at 15:41
        Scanning xx.xxx.xx.xx [4 ports]
        Completed Ping Scan at 15:41, 0.10s elapsed (1 total hosts)
        Initiating Parallel DNS resolution of 1 host. at 15:41
        Completed Parallel DNS resolution of 1 host. at 15:41, 0.00s elapsed
        Initiating UDP Scan at 15:41
        Scanning xtremek.com (xx.xxx.xx.xx) [1 port]
        Completed UDP Scan at 15:41, 0.07s elapsed (1 total ports)
        Initiating Service scan at 15:41
        Initiating OS detection (try #1) against xtremek.com (xx.xxx.xx.xx)
        Retrying OS detection (try #2) against xtremek.com (xx.xxx.xx.xx)
        Initiating Traceroute at 15:41
        Completed Traceroute at 15:41, 0.01s elapsed
        NSE: Script scanning xx.xxx.xx.xx.
        NSE: Script Scanning completed.
        Nmap scan report for xtremek.com (xx.xxx.xx.xx)
        Host is up (0.00013s latency).
        PORT     STATE  SERVICE VERSION
        1720/udp closed unknown
        Too many fingerprints match this host to give specific OS details
        Network Distance: 1 hop

        TRACEROUTE (using port 1720/udp)
        HOP RTT     ADDRESS
        1   0.13 ms xtremek.com (xx.xxx.xx.xx)

        Read data files from: /usr/share/nmap
        OS and Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
        Nmap done: 1 IP address (1 host up) scanned in 2.04 seconds
        Raw packets sent: 27 (2128B) | Rcvd: 24 (2248B).

    The only thing I can think of is a firewall or VPN issue. Is there anything else I can check for before requesting that they look at the firewall server again?

    Read the article

  • Several New Hints

    - by Ondrej Brejla
    Hi all! Today we would like to introduce to you some of our new experimental hints for NetBeans 7.2. They are called Unused Use Statement and Immutable Variables.

    Unused Use Statement

    This hint is quite simple. It highlights (underlines) your use statements that are not used. A typical use case is after some refactoring, when you forget to remove some obsolete use statements. This hint warns you about them and allows you to remove them easily. Just click on the hint bulb in the gutter and select Remove Unused Use Statement. And of course, it works in combined use statements too.

    Immutable Variables

    The next one is the hint which checks for too many assignments into a variable. And why? That's simple. Mostly you should use just one assignment into one variable. But sometimes you are lazy and you write something like the first screenshot in the original post. That's quite wrong, because what you really do is what the second screenshot shows. And that's exactly the case when our new hint warns you that Too many assignments (2) into variable $foo occurred. Nothing more. Yes, we know that there are some cases where there could be more assignments and no warning should occur, e.g. when one likes the longer increment syntax more than the short one. So we tried to handle these cases so as not to bother you when there is no need.

    Note: We are almost sure that this hint doesn't cover all your use cases, because there are a lot of them. So if you find something strange, write it into our Bugzilla so we can handle it better for you. Thanks for your patience!

    And the last thing is that you can set the number of allowed assignments in Tools -> Options -> Editor -> Hints -> PHP: Immutable Variables.

    Note: This hint works just for common variables, not for fields. We have an enhancement request for that and it should be implemented in the next version of NetBeans (probably 7.3).

    And that's all for today. As usual, please test it, and if you find something strange, don't hesitate to file a new issue (product php, component Editor). Thanks.

    Read the article

  • How to deal with malicious domain redirections?

    - by user359650
    It is possible for anybody to buy a domain name containing negative terms and point it to someone's website in order to damage their reputation. For instance, someone could buy the domain child-pornography.com and point it to the address 64.34.119.12, which is the address behind stackoverflow.com, and people navigating to the domain in question would end up viewing content from Stack Exchange, which would be detrimental to Stack Exchange's image. To illustrate this, I added the entry 64.34.119.12 child-pornography.com to my /etc/hosts file and tested (the screenshot of what I obtained is in the original post).

    I personally found this user experience terrible, as someone could think that Stack Exchange are in favor of child pornography and awaiting support from the community to create a Q&A site about it. I tested with other websites and experienced other behaviors that I would categorize as follows:

    1. Useful 404 page (happens with stackoverflow.com): For me the worst way of handling this, as the image of the targeted website is directly associated with the offending domain. The more useful the 404 page, the bigger the impression that the targeted website would be willing to help with child pornography.

    2. Redirection (happens with microsoft.com): For instance, when accessing child-pornography.com you get redirected to www.microsoft.com. It isn't as bad as above, as the offending domain name never appears alongside the targeted website's content, but it is still bad in my opinion, as it gives the impression the targeted website bought the offending domain and redirected it to their website to get more traffic.

    3. Server error (happens with lemonde.fr): You get an error from the webserver whose page doesn't contain any content that can be associated with the targeted website (e.g. default Apache 404 page, completely blank page). I believe that is good, as the identity of the targeted website isn't revealed.

    Above are the various behaviors I experienced, but I also thought about a fourth way of dealing with this, which is described below.

    4. Disclaimer page (haven't found any website implementing that technique): Display a message such as: "You ended up here because someone bought and linked the child-pornography.com domain to our website. We do not own this domain and do not associate ourselves with it. This request has been logged by our servers and we will raise this issue with the competent authorities to have this domain taken down. If you want to access our website, please click here." The good thing about this method is that it can be implemented at the application layer (good if you don't have control over the web server, which happens with some hosting solutions), allows you to protect yourself from any liability, and offers the visitor a redirect to your own website.

    Which of the above options would you implement to deal with malicious domain linking (IMO only options 3 and 4 are worth considering)?
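    For option 4, the check can sit very early in the request pipeline. A minimal ASP.NET sketch (the host list and the wording are placeholders; nothing above prescribes a particular implementation) would be:

        using System;
        using System.Web;

        // Registered in web.config as an IHttpModule; intercepts requests whose Host
        // header is not one of the domains we actually own and serves the disclaimer.
        public class HostWhitelistModule : IHttpModule
        {
            private static readonly string[] OwnedHosts = { "example.com", "www.example.com" };

            public void Init(HttpApplication app)
            {
                app.BeginRequest += OnBeginRequest;
            }

            private static void OnBeginRequest(object sender, EventArgs e)
            {
                var ctx = ((HttpApplication)sender).Context;
                string host = ctx.Request.Url.Host;

                foreach (string owned in OwnedHosts)
                    if (string.Equals(host, owned, StringComparison.OrdinalIgnoreCase))
                        return; // our own domain: serve the site normally

                ctx.Response.ContentType = "text/html";
                ctx.Response.Write(
                    "<p>You ended up here because someone pointed the domain '" +
                    HttpUtility.HtmlEncode(host) + "' at our servers. We do not own or endorse " +
                    "that domain. <a href=\"https://www.example.com/\">Continue to our website.</a></p>");
                ctx.Response.End();
            }

            public void Dispose() { }
        }

    Because BeginRequest fires before any page or routing handler, the disclaimer is served for every path under the unwanted host while leaving the real domains untouched.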

    Read the article
