Property on myhome.ie listed at 642 million euro, etc.

What’s with these properties on myhome.ie claiming to be in Dublin and London and costing tens of millions of euro? Is this an attempt to skew myhome.ie prices, do you think? One house is priced at over €642 million. Choose Dublin county and you get 528 pages with 10 properties per page; even if all the other properties were free, that one listing alone would keep the average price over €120k.
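
A quick sanity check on that claim, assuming the counts quoted above (528 pages × 10 listings):

```python
# One €642m listing among ~5,280 Dublin listings floors the average
# price even if every single other listing were priced at zero.
listings = 528 * 10            # 5,280 properties
outlier = 642_000_000          # the €642m "house"
print(outlier / listings)      # ~121,590 -> average > €120k
```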

myhome.ie/residential/brochu … pb/1964471

Worth every bit of the commission!

(As it’s listed with a real agency, I can’t figure out if this is a joke ad or not)

What’s it worth?

Big industrial-size kitchen for those dinner parties :slight_smile:

That price puts it at c. €34,000 per square foot (c. €365,000 per square metre). Considering the most expensive street in the world commands approx $80k/m², €365k/m² is a long way off. I would imagine it’s a mistake; it should more likely be €64.2m.
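
For what it’s worth, the arithmetic checks out (a small sketch; the implied floor area is derived from the quoted price and rate, not from the listing):

```python
# Check the per-area figures: €642m at c. €34,000/sq ft.
SQFT_PER_SQM = 10.7639                    # square feet per square metre

price, per_sqft = 642_000_000, 34_000
print(price / per_sqft)                   # implied area: ~18,882 sq ft
print(per_sqft * SQFT_PER_SQM)            # ~€366,000 per m², as quoted
```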

I don’t think it’s a mistake because, as the OP said, there are a number of really expensive London properties listed as having Dublin addresses. I know it seems like a bit of a conspiracy theory, but it does seem like an attempt (albeit a rather ham-fisted one) to alter the perception of current trends in asking prices in Dublin. Or have I been watching too many X-Files re-runs?!

I assumed the €642million property was in SOUTH COUNTY DUBLIN :stuck_out_tongue:

I found this amusing

Maybe it’s a FIRE SALE. BOOM BOOM!

Maybe it is some weird software error, but it looks like a deliberate attempt to obscure price information. Or are we paranoid? I don’t know. The adverts themselves are crappy junk full of typos, as if the agency didn’t care about getting it right and just wanted to clog up the system for some reason.

As far as attempts to skew market perceptions go, this appears to have gained traction – there are no qualifying statistics in this Irish Independent article from July 2:

independent.ie/national-news … 54813.html

"Property website MyHome.ie has also backed up the stabilisation theory, with its figures showing prices are dropping at a more moderate rate.

The average asking price of a three-bed semi-detached house remains unchanged nationally over the past three months at €185,000."

It would be nice if the reporting included some detail on the quality control applied to the data and on how the statistics were drawn from it.

Not to mention RonanL from Daft coming out with the “heat maps” that show asking prices, presumably including these gems.

It’s not the heat map I’d be worried about, it’s the calculation of the average price.
Something on this scale could have a definite impact on the final figures.
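
To put rough numbers on it (illustrative sample sizes, not Daft’s actual counts):

```python
# The shift a single €642m outlier causes in a plain mean is
# outlier / n, so it depends entirely on the sample size.
outlier = 642_000_000
for n in (5_280, 50_000, 500_000):        # illustrative sizes
    print(f"n={n:>7,}: mean shifted by ~€{outlier / n:,.0f}")
# n=  5,280: ~€121,591 (a county-level average: huge distortion)
# n=500,000: ~€1,284   (a national-scale sample: barely visible)
```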

You might not have read the response below from Ronan Lyons to similar comments on irisheconomy.ie:

irisheconomy.ie/index.php/2012/07/02/were-different-roysh-the-decoupling-of-the-dublin-property-market/#comments

Indeed I didn’t. And Ronan doesn’t explain how the outliers are eliminated. I believe there’s a link to the methodology somewhere and that may explain it, but I’m pretty convinced the whole thing is a PR exercise anyway so I’m not going to spend a whole lot of time on it.

Ah here now - evilal, and after we’d come so far in the other thread! And then after all that, you’re not even prepared to read the methodology and, despite my specifically telling you that this was an academic project (a collaboration between NUIM and Oxford) that we managed to convince Daft to put up online, you try to write the whole thing off as a PR exercise!

It’s one thing to disagree in good faith, it’s another thing just being plain rude.

Fair enough, I’ll read the methodology paper.

I’m still reading the methodology paper (all 60 pages of it), but after a skim I can see no mention in it of excluding outliers. So I assume that the €600m property is included in the averages, unless Ronan can say differently.

This would be the €600m property that was not even in the Daft sample to begin with?!

Perhaps I wasn’t clear in the other thread - exclusion of outliers happens in the quarterly Daft reports, where, with sample sizes in the low tens of thousands, it’s possible (although unlikely) that they could have a material effect on the outcome of the index. It does not happen in the heatmaps, where sample sizes in the high hundreds of thousands mean that the matrix algebra underpinning the econometrics is even less likely to be tripped up by one or even dozens of randomly badly listed properties. (Note also that the sample included only those properties that we can locate geographically with a reasonable degree of precision and which have no ‘odd’ features.)
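
To illustrate the sample-size point with a toy simulation (this is just a sketch on synthetic data with a simple OLS on log prices, not the actual heatmap econometrics):

```python
import numpy as np

# Toy illustration only: fit log(price) ~ floor area by OLS on a
# large synthetic sample, then inject one absurdly priced listing
# and compare the estimated coefficients.
rng = np.random.default_rng(42)
n = 500_000
area = rng.uniform(40, 250, n)                       # m², synthetic
log_price = 11.0 + 0.006 * area + rng.normal(0, 0.3, n)

X = np.column_stack([np.ones(n), area])
clean, *_ = np.linalg.lstsq(X, log_price, rcond=None)

# Add a single 100 m² "house" listed at €642m.
X2 = np.vstack([X, [1.0, 100.0]])
y2 = np.append(log_price, np.log(642_000_000))
tainted, *_ = np.linalg.lstsq(X2, y2, rcond=None)

print("without outlier:", clean)     # [intercept, area coefficient]
print("with outlier:   ", tainted)   # near-identical at this n
```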

And if after reading up on the methodology you’re still inclined to assert that one outlier can trip up the model (note: without any good explanation as to why - this being academic research, I’m open to comments of a substantive nature), then I’m sorry, we’ll just have to leave it at that. Without wanting to be melodramatic, it’s akin to someone looking at the evidence for evolution, then turning around to a scientist and saying “But why do monkeys still exist?”

You seem very defensive here, Ronan. But I gather from your post that I am correct: obviously high (or obviously low) prices are not excluded as outliers.

I have no opinion at this point on whether this is significant in terms of an effect on the heat maps, but you seem anxious to assert that it would not be. I do think it’s a further quality issue with the data set that you chose to use. More once I’ve read the rest of the doc…

I’m not defensive, certainly not towards comments that engage with what has been done. It’s just incredibly tiresome when - again, for want of a better analogy - I’m plodding along at analysis level 7 or 8 out of 10, hoping for feedback and engagement at that level or higher, willing to engage with anyone at any level as long as it helps them move up, and I end up having to deal time and again with someone at level 1 who, despite being engaged and, from what I can tell, intelligent, refuses to budge.

Your point about quality has certainly been taken on board but - again, I’m trying not to be rude - it’s bloody obvious. It’s the very first hurdle we overcame when we started the Daft Report 7+ years ago. These are list prices - but happily they are very highly correlated with, and usually a leading indicator of, transaction prices. The richer nature of the dataset means that it offers something a transaction dataset probably never will. So the question then is whether the use of list prices affects the perceived spatial relativities in prices. As I’ve mentioned a few times on, I think, one other thread, that’s an interesting research question, one which I can’t wait to examine. (And to that end, the sooner the house price register comes out, the better.) But the evidence so far is that this effect is at most second-order. All first-order effects appear to be well captured by list prices.

Wow, that’s bitchy. You really must be an academic :slight_smile:

Given that someone on the thread claimed you were correcting for obvious outliers, I don’t see why it puts me “on level 1” to question that. A simple “No, we’re not correcting for that” would have sufficed, rather than trying to paint someone who questions you as being merely at “level 1” compared to your lofty “level 7 or 8”. Do you perhaps have a heat map of analysis levels you’d care to share?

It does not seem “obvious” to me that the data is poor. If it were obvious, surely it would have been reported as such by, for example, the Journal, instead of them claiming it was dealing with real sale prices.

OK, let’s get into the meat of it. Here Ronan decides that the prices are not sufficiently bubbly, so the dataset is altered to bubblify them further:

However, there is no information on how (or whether) duplicates of Daft properties were removed from this set (just because an EA doesn’t formally advertise on Daft doesn’t mean that the properties aren’t advertised there by the owner, or another agent).

There is also nothing to suggest that the statement about Daft’s coverage has any basis in fact. For example, does an analysis of the non-Daft properties vs the Daft properties actually show any statistically significant price difference? This is a simple test that was apparently not done.
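
The kind of test I mean, sketched with placeholder data (a Welch t-test on log asking prices; the two arrays below stand in for the real samples, which I don’t have):

```python
import numpy as np
from scipy import stats

# Placeholder samples - the real test would use the actual Daft and
# non-Daft asking prices. Compare logs, since prices are right-skewed.
rng = np.random.default_rng(0)
daft = rng.lognormal(mean=12.2, sigma=0.5, size=200_000)
non_daft = rng.lognormal(mean=12.3, sigma=0.6, size=34_000)

t_stat, p_value = stats.ttest_ind(np.log(daft), np.log(non_daft),
                                  equal_var=False)
print(t_stat, p_value)
# A significant difference would support adding the extra listings;
# no difference would suggest Daft's coverage wasn't biased at all.
```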

It’s also not clear why only 2006-2007 non-Daft listings are included. Why not post-bubble non-Daft prices? Is Ronan saying that Daft became more bubbly after 2007? Is there any data to back this up? It’s a pretty mystifying change to the data.

This appears to be a very odd adulteration of a data set. There’s no real analysis given to justify the decision to bung in these 34k extra listings, other than that they felt the top end of the market “could be” under-represented.

Let’s hear your Level 7 analysis here Ronan.