Oracle's Ellison talks up 'ungodly speeds' of in-memory database. SAP: *Cough* Hana

Oracle headman Larry Ellison kicked off this year's OpenWorld conference on Sunday by touting the lightning-fast performance of the Oracle 12c database's new in-memory database caching option, and he brought along some brand-new hardware to prove his point. In-memory databases are nothing new, of course. SAP has long flaunted …

COMMENTS

This topic is closed for new posts.
  1. Duncan Macdonald

    Cheap hardware BUT

    Knowing Oracle the cost of the software licences (and compulsory maintenance) will inflate the bill to the point where it is many times the price of the competition.

    1. wikkity

      Re: Cheap hardware BUT

      As much as I dislike Oracle as a company, especially with what they've done with Sun's legacy, there are times when Oracle is the only choice. I'd feel much better using PostgreSQL; however, it doesn't always cut it (including EDB).

      The likelihood is, though, that if you need Oracle, licensing and maintenance are an acceptable cost, especially if the client expects Oracle.

      1. Anonymous Coward
        Anonymous Coward

        Re: Cheap hardware BUT

        Microsoft: *Cough* SQL Server 2014

        Is more like it...

        http://blogs.technet.com/b/dataplatforminsider/archive/2013/07/22/architectural-overview-of-sql-server-2014-s-in-memory-oltp-technology.aspx

        1. W. Anderson

          Re: Cheap hardware BUT

          Not surprisingly, the commenter making reference to Microsoft SQL Server database offerings forgot to quote the Microsoft official statement that "any worthwhile" in-memory SQL Server 2014 capability will be: "for a future SQL Server release."

          It never ceases to amaze that Microsoft minions make any attempt to associate their favourite company's technology with the latest software achievements and innovations from competitors which are in reality an order of magnitude ahead of anything from Redmond.

          1. Anonymous Coward
            Anonymous Coward

            Re: Cheap hardware BUT

            SQL Server 2012 already has in-memory databases. In fact there are two different ways: one is something similar to this, the other is within Analysis Services and cubes. Check the article about VertiPaq vs column store, which discusses querying 4 billion rows in 1 second and which of the two methods is better suited. This is current technology, right now.

  2. Anonymous Coward
    Anonymous Coward

    Wait, what?

    "Using a sample database with a table containing roughly three billion rows WITH NO INDEXES"???

    So whatever the query was doing (any details anyone?) it was guaranteed to be doing a table scan of a disk-based table containing 3 billion rows? I'm not surprised the in-memory alternative of a carefully scripted demo beat it hands down.

    Now put a sensible index or two on that table (OK, so indexes can compromise write performance, but this is a demo so all bets are off) and repeat that query... I bet you won't see the performance difference being quite so marked.
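
    For anyone who wants to poke at this themselves, here's a minimal sketch using Python's bundled sqlite3 (the table, row count and query are all invented for illustration; and note the DB here lives entirely in RAM, which only strengthens the point that indexes still pay off there):

    ```python
    import sqlite3
    import time

    # Toy stand-in for the demo's table -- name and size are made up.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE sales (id INTEGER, amount REAL)")
    db.executemany("INSERT INTO sales VALUES (?, ?)",
                   ((i, i * 0.01) for i in range(1_000_000)))

    def timed(query):
        t0 = time.perf_counter()
        db.execute(query).fetchall()
        return time.perf_counter() - t0

    no_index = timed("SELECT amount FROM sales WHERE id = 987654")    # full scan
    db.execute("CREATE INDEX idx_sales_id ON sales (id)")
    with_index = timed("SELECT amount FROM sales WHERE id = 987654")  # B-tree seek
    print(f"no index: {no_index:.4f}s   with index: {with_index:.6f}s")
    ```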

    1. JasonLaw

      I'll take that bet

      The difference between 'designed for disk' tables and 'designed for RAM' tables is pretty significant. The only reason for having an index is to reduce the impact of having to read the spinny rusty stuff, which ain't a problem with RAM.

      Column store indexes (indices, if you prefer) have already made a remarkable difference to summarising very large datasets. In-memory just takes that to another level.
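
      If you want to see why in miniature, here's a rough Python sketch (the array module playing the part of a column store; sizes and timings are illustrative only):

      ```python
      import array
      import time

      N = 5_000_000  # illustrative size, nothing like a real warehouse table

      # Row store: one dict per row, field values scattered across the heap.
      rows = [{"id": i, "amount": i % 100, "region": i % 5} for i in range(N)]
      # Column store: the 'amount' column alone, one contiguous block of ints.
      amounts = array.array("q", (i % 100 for i in range(N)))

      t0 = time.perf_counter()
      row_total = sum(r["amount"] for r in rows)   # drags every whole row in
      t1 = time.perf_counter()
      col_total = sum(amounts)                     # streams one tight column
      t2 = time.perf_counter()

      assert row_total == col_total
      print(f"row-wise: {t1 - t0:.2f}s   column-wise: {t2 - t1:.2f}s")
      ```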

      1. BlueGreen

        Re: I'll take that bet

        > only reason for having an index is to reduce the impact of having to read the spinny rusty stuff, which ain't a problem with RAM

        garbage. Indexes can help lots for all-in-memory data too.

        > In-Memory just take that to another level.

        Permit me to rephrase what I understood that to mean: "putting data in a fast place makes it faster than having it in a slow place". I can't argue.

      2. YARR
        Thumb Down

        Re: I'll take that bet

        "The only reason for having an index is to reduce the impact of having to read the spinny rusty stuff, which ain't a problem with RAM."

        An index lookup is an O(log n) operation, versus trawling through the entire database, which is O(n). If a non-indexed query takes a couple of seconds in memory, then the same indexed query in memory would be much faster, perhaps a few thousandths of a second.
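
        To put rough numbers on that, a sketch using Python's bisect as a stand-in for a B-tree index (the sizes are invented):

        ```python
        import bisect
        import random
        import time

        # 10 million sorted keys standing in for an indexed column.
        data = sorted(random.randrange(10**9) for _ in range(10_000_000))
        target = data[7_654_321]

        t0 = time.perf_counter()
        linear = next(i for i, v in enumerate(data) if v == target)  # O(n) trawl
        t1 = time.perf_counter()
        indexed = bisect.bisect_left(data, target)                   # O(log n)
        t2 = time.perf_counter()

        assert data[linear] == data[indexed] == target
        print(f"O(n) scan: {t1 - t0:.3f}s   O(log n) lookup: {(t2 - t1)*1e6:.1f}us")
        ```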

        IMO this brute-force approach to database querying does not bode well for the future. I can foresee a time, perhaps a decade or so away, where this technology is so affordable that there is no longer an economic case for employing someone to carefully design, maintain or optimise your database. Then, after that, there may no longer be a need for a well-designed DBMS like Oracle at all... Here's hoping we reach the end of Moore's Law before then.

    2. Billl

      Re: Wait, what?

      The point is that you won't need indexes. Or do you like managing your indexes?

    3. BlueGreen

      Re: Wait, what?

      Summat fishy here. "With the new in-memory option, it [3 billion rows] took less than a second".

      Let's call that a round second then. That's 1/3 of a nanosecond per row. On a 12-core machine, if I understand, each core of which could probably eat data as fast as it came in. Doesn't add up.
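
      Back-of-envelope, for anyone checking my sums (the 3 GHz clock is my assumption, not a spec):

      ```python
      rows = 3_000_000_000   # the demo table
      seconds = 1.0          # "less than a second", rounded up to be generous
      cores = 12             # per the machine described

      ns_per_row = seconds * 1e9 / rows    # ~0.33 ns per row overall
      ns_per_core = ns_per_row * cores     # ~4 ns per row on each core
      cycles = ns_per_core * 3.0           # assuming ~3 GHz: ~12 cycles/row
      print(f"{ns_per_row:.2f} ns/row, {ns_per_core:.1f} ns/row/core, "
            f"~{cycles:.0f} cycles/row/core at 3 GHz")
      ```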

      Also the whole row/col thing sounds like a smokescreen. It's never that simple.

      (btw I have no experience with these types of machines)

  3. Aristotles slow and dimwitted horse

    Need some point of reference...

    Is it time to update El Reg's standard units of measure register? I'm struggling to understand what order of magnitude, either larger or smaller than the speed of <insert something really fast here>, an "ungodly" or conversely "godly" speed relates to.

    As a start, presumably in the El Reg canon of weights and measures we could ascribe to "ungodly speed" the more scientific-sounding soubriquet of "the Ellison coefficient" - or something less favourable?

    1. bpfh
      Trollface

      Re: Need some point of reference...

      I agree. I vote for 1 Ellison being a base reference of IT speed, so his RAM database is actually 1.3 Kiloellisons faster than Ellison's base product.

      Talking about RAM and El Reg's Standard Units of Measure, I did some rough calculations on the speed of a sheep in a vacuum over the summer hols... The SheepSecs needs to be refined!

      1. Anonymous Coward
        Anonymous Coward

        Re: Need some point of reference...

        Hope you weren't too rough on the sheep...

  4. Anonymous Coward
    Anonymous Coward

    HANA is absurd, this goes beyond it

    Please, give me a break: so you store in memory BOTH the columnar store and the row store? Disk is between 6 (sequential) and 100,000 (random) times slower than memory, so just by using memory instead of disk you already get a nice speedup. At least in my book, having the row store in memory should give you enough. A pretty big RAM disk is not the same, but it is the poor man's version of doing this.

    I cannot imagine business environments having a big database in need of more than a 10x-100x speed up. Someone wanting one of these is to enable new functionality, not to speed up an existing process. Because if you have a process that is 10x slower than it should be, you have a very different problem with your application QA control, not a performance one.

    Ah, and yes, one small question that is almost never answered by the HANA/Oracle guys when talking about in-memory databases is... how well do they scale? See, the upper physical RAM limit for a machine is way orders of magnitude lower than for a disk array. So you get this big RAM machine and start to happily use it, and then hit its memory limit. Prepare your checkbook, because the bill is surely going to look nice (and Oracle/SAP are not exactly known for their generosity with maintenance fees).

    Not to mention, oh, you know, that perhaps instead of having humungous applications that no one understands completely, that are "customized" by fast-turnaround "consultants" or "engineers" that have trouble even writing a single table select statement and have reams of copy and paste code, someone that actually knows what is going on could take a look at their existing bottlenecks and fix them for, say, 1000 times less than the price of one of these beasts?

    In memory databases for both SAP and Oracle are a means of trying to escape a hole they have dug by... digging deeper?

  5. Aristotles slow and dimwitted horse

    DOH!!!

    "I cannot imagine business environments having a big database in need of more than a 10x-100x speed up. Someone wanting one of these is to enable new functionality, not to speed up an existing process. Because if you have a process that is 10x slower than it should be, you have a very different problem with your application QA control, not a performance one."

    This above statement is so wrong, and so generalised as to be completely and utterly moronic. I'm not sure what the sum total of your enterprise or datacentre class IT experience is, but it seems limited.

    1. Billl
      Facepalm

      Re: DOH!!!

      Thanks for this comment. I was thinking up my response to this poster's nonsense, but I think your total disregard, and demonstrated disdain, is probably the correct route to take. You've saved me at least 2 minutes.

      1. Blane Bramble
        Happy

        Re: DOH!!!

        > You've saved me at least 2 minutes.

        Is that a 100x or 1000x speed up?

      2. Anonymous Coward
        Anonymous Coward

        Re: DOH!!!

        Probably four Ellisons or 100 startled sheep...

    2. Anonymous Coward
      Anonymous Coward

      Re: DOH!!!

      "I'm not sure what the sum total of your enterprise or datacentre class IT experience is, but it seems limited."

      Let's see. My experience is "only" 15 years of dealing with SAP (with Oracle as the underlying DBMS; nobody in his right mind would use anything else) and standalone Oracle instances used mostly as back ends for CRM and data warehousing solutions, some of them bespoke, some of them market standards. My enterprise/datacentre experience involves being part of a team that manages about 1200 servers scattered around the world, mostly Wintel boxes with about 30% of them running on VMware hosts.

      After all those years I've seen, and fixed, quite a few examples of badly managed databases and systems. So many that over time I've made a few good bucks getting side consulting jobs on performance tuning projects for many customers around the world using a variety of environments, from stock brokerage to content recommendation systems, using anything from clustered MySQL instances to stand alone Oracle and Postgres instances.

      I stand by my affirmation, backed by experience, that anyone in need of a 10x-100x speed increase in their *existing* systems has a much bigger problem that cannot be solved by a HW upgrade. Because my experience -and that's what I've been paid for- has been to get these levels of performance increase without replacing a single nut and bolt. And I've managed to do it by the simple process of determining where the bottlenecks were, applying common sense and knowledge about how things work, and fixing them.

      Mind you, the typical level of "enterprise" optimization around these kind of problems is usually to add indexes, which if correctly applied can give you some performance improvements. But indexing is, again in my experience, only a marginal benefit and if applied as a checklist performance optimization technique, even dangerous. So no, I'm not one of these guys that say "just add indexes" to a performance problem and walks away, which is usually the deepest level of thought given by "enterprise IT" bods to these problems.

      Yes, there is obviously a limit. After you get the first speedup, further changes are unlikely to get similar levels of performance gains. If they were, one would be able to repeat this ad infinitum and get infinite performance, which is not possible. Successive performance tuning steps get increasingly smaller gains, to the point of diminishing returns. At that point you simply give up tuning, because faster hardware is cheaper than the incremental benefit you get from each step. But unless you work with the likes of really top people (and no, your big-name/big-fee consulting guys do not qualify, at least in my experience), I have to say it is likely you who is lacking the experience and field knowledge needed to declare this statement moronic. Or perhaps it's that you're trying to protect your "enterprise level" position?
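
      The arithmetic behind that diminishing-returns point, spelled out (the 90/10 split is an illustration, not a measurement):

      ```python
      def overall_speedup(hot_fraction, hot_speedup):
          """Amdahl's law: total gain when only part of the runtime gets faster."""
          return 1.0 / ((1.0 - hot_fraction) + hot_fraction / hot_speedup)

      # First tuning pass: the hot 90% of runtime made 10x faster -- big win.
      print(f"{overall_speedup(0.90, 10):.1f}x")    # ~5.3x overall
      # Keep going: even 100x on that same 90% barely doubles the result,
      # because the untouched 10% now dominates.
      print(f"{overall_speedup(0.90, 100):.1f}x")   # ~9.2x overall
      ```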

      Because time after time, in my experience, I've seen such incredibly gross levels of incompetence in architecture and coding (Java, C, C++, C#, BASIS and SQL code) from "enterprise level" people that it is sometimes embarrassingly easy to tune things.

      This is probably part of the problem. The emperor is naked, without anyone around able or willing to tell him so. Just because you, or someone around you, has declared that the only way of speeding things up is to get faster HW does not mean that he/she is right. In fact, he/she is likely wrong by a 10x-100x order of magnitude.

      So exactly what is the "sum total of your enterprise or datacentre class IT experience" that qualifies you to say that this statement is moronic, please? Or is it perhaps that you work in an environment of Tom Duff/D. Knuth/K. Thompson/Torvalds-level enterprise IT? I'm certainly not on that level; if that's the case, can I send you a CV?

      1. Billl

        Re: DOH!!!

        I think the point is that you don't seem to see the need for a 100-1000x speedup. That seems very naive for someone who has been in the industry as long as you state. I've seen queries take longer than 48hrs. I've seen queries that take 2 hours, that are unusable because the data that made it up is out of date already. If you could make that 2 hour query take only 2 minutes, then just imagine all of the decisions you could make that previously you were unable to substantiate.

        The point is that this tech could provide for new uses of existing data that you have not thought of before. Currently you may limit your data size, or query, to accommodate a faster response. With this technology you may not have to do that. Currently, to enable decision support you may have to cache your data in an Exalytics (TimesTen) or squeeze your data into a HANA, but with this technology, you may not have to.

        If you work in this industry, try not to limit yourself by what you currently do. New tech implies new opportunities.

        1. Anonymous Coward
          Anonymous Coward

          Re: DOH!!!

          "The point is that this tech could provide for new uses of existing data"

          Ahhh... now I see the problem: you did not read my post and let your aggressive tendencies go. See, I said "Someone wanting one of these is to enable new functionality, *not to speed up an existing process*". Get it now?

          There is a big difference between saying "we're going to need incredible performance if we want to do this new X thing" and saying "we're doing X now in 100 hours but it would be best if we could do it in 1 hour". I maintain that accepting endemic poor performance levels is a problem that cannot be resolved by spending more money on HW/SW. Not saying that everything should be tuned to the last drop of performance, because that is not cost effective; only the 10% of the code that takes 90% of the time.

          If you are running a business process and accepting 100 times less performance than it should have, you're doing it doubly wrong. First, because as a business user you're straitjacketing your business to some artificially imposed limitation and not really communicating your requirements to the technical people in charge. The second wrong is in technical management, which is accepting at face value statements about generic performance levels without benchmarking, or without going any deeper than saying "buy more hardware" at a problem.

          Not all situations are like that. I've seen places where it was actually the case that the HW and SW were already delivering all they could and then a bit more. But the average business setting usually ranks low on this. And ironically, the more "enterprise" managed they are, the more under-utilized their HW and SW are. Not to mention the heaps of shovelware that populate "enterprise IT".

          "I've seen queries that take 2 hours, that are unusable because the data that made it up is out of date already. If you could make that 2 hour query take only 2 minutes"

          And that's exactly my point. I've seen -and done- jobs previously taking 18 hours executing in 2 minutes. And 24-hour jobs performed in 6 minutes. And 2-minute jobs resolved in 2 seconds. All without changing a single wire, or touching more than 1% of the code/data structure. Seems to me that you need help from some professional tuning services. Unless someone skilled in the art/craft that is performance tuning has already done it, you'll be surprised at how much performance can be improved.

          Don't believe me? Try to crowdsource the challenge: set up a prize for the best result and see how you can get better utilization of what you have already paid for. Only remember, you need skilled resources; don't expect to spend 200K on hardware, then have someone on minimum wage build and maintain your application and machines, and still get the best performance out of it.

          I'm by no means closing my eyes to new technologies and opportunities, especially the ones afforded by new HW and SW developments. But I'm by all means against wasting your hardware, your software and your end users' time because someone has set an arbitrary bar on how "enterprise" experienced and skilled they are at something and declared that, short of purchasing some big iron, things can't get any better.

          Now where is the moronity and lack of enterprise experience? Seems to me that your reaction is just a defensive reflex.

          1. This post has been deleted by its author

          2. Billl
            Holmes

            Re: DOH!!!

            " let your aggressive tendencies go."

            "Now where the moronity and lack of enterprise experience is? Seems to me that your reaction is just a defensive reflex?"

            I think you mistake me for someone else. I never questioned your experience, just your conclusions. You can discern my comments based upon my handle. I have not been anonymous here, so it shouldn't be too difficult.

            From your most recent comments I can only conclude that your point is that existing performance problems are generally not SW/HW related, but poorly tuned queries; therefore, you seem to imply that many companies will pay huge sums of money for a product like this when they could have just paid someone like you to tune their environment (likely saving millions). I agree with that conclusion, if that is indeed what you are saying.

            My problem with your comments is that I am aware of many, many situations where existing environments are just hitting the wall on what they can do. They've tuned the crap out of it. They have so many indexes that they just can't possibly keep up with them all. Sure, there are lazy DBAs that will just take their poor admin tendencies and move them over to faster HW/SW and call it good. So? That's been going on since the beginning of computing, and will likely continue. So I must ask, now that I think I get your point: why make the point in the first place?

    3. Anonymous Coward
      Anonymous Coward

      Re: DOH!!!

      "what the sum total of your enterprise or datacentre class IT experience is, but it seems limited"

      and yours is?

  6. Anonymous Coward
    Anonymous Coward

    Ellison found the hot water again...

    1. In-memory is at least 10-year-old technology (IBM and Sybase-SAP have had it for at least 10 years)

    2. Column-store DB technology is at least 20 years old (Vertica, Greenplum-EMC, Sybase-SAP etc...)

    Storing the same DB in both row-based and column-based form at the same time is a waste of resources:

    - double the amount of "cheap" HW is needed

    - increased complexity of the system

    so in my opinion it's only a well-designed Oracle solution by the marketing division to follow the trends

    1. Billl

      Re: Ellison found the hot water again...

      To claim that Oracle is late to the game on in-memory is nonsense. Oracle has had TimesTen since 2005. TimesTen has been around (on its own) since 1996. It was actually created by HP. How long has Hana been around? Oh, there it is... 2010!!!

      Which IBM and Sybase-SAP in-memory tech has been around for at least 10 years?

      Hana has been around since 2010, that's only 3 years. IBM didn't put in-memory into DB2 until 2013, so you can't claim they've had it longer either. As far as I know, if you want in-memory from SAP you can't use Sybase; you have to use the unproven Hana technology. IBM's in-memory tech "solidDB" was released in 2008, so you can't mean that...

      Your comment is obviously ill-conceived. Can you please clarify your comment?

      1. tothttt

        Re: Ellison found the hot water again...

        just to clear some points:

        Until Oracle bought Sun (in late 2009), they only tried to convince their customers (and themselves) that TimesTen would be integrated into OraDB v10x, then it slipped to v11x, but obviously they failed. After the Sun acquisition it was clear that TimesTen would be a separate product bundled with Sun hardware. TimesTen before 2008 was only able to use table-cache methods, and as such it wasn't a real in-memory DB.

        Anyway, TimesTen is a completely different product and uses a completely different engine than Oracle's mainstream database engines (10,11,12...).

        SAP (well, it was Sybase before SAP bought them in 2010) announced the ASE v15.1 in-memory option in 2006, which is completely the same product as their mainstream database: in ASE 15.1 you can simply select device:memory instead of device:disk if you have enough RAM in your server.

        Solid Technology was acquired by IBM in late 2007, but they had offered their SolidDB v3 (which is a true in-memory DB) since 2003, and as of today it has over 3 million deployments.

        1. Ramazan

          Re: Anyway, TimesTen is a completely different product

          Do they use different SQL syntax? Or does TimesTen not support PL/SQL?

          1. Billl

            Re: Anyway, TimesTen is a completely different product

            I'm not sure what he's saying here anyway. SAP uses HANA and Sybase, two separate products. IBM uses DB2 and solidDB. TimesTen is separate from Oracle DB, but now they're moving some of those features into Oracle DB. So?

        2. Billl

          Re: Ellison found the hot water again...

          None of this is for more than 10 years. The point was to find out what tech IBM and SAP have been doing for more than 10 years. SolidDB, as you state, was released in 2003, and Sybase started in 2006. "At least" implies... well, at least 10 years. You've demonstrated only "at most".

          If selecting "device:memory" is the same as in-memory, then why does SAP have a completely different product in HANA? You are duplicitous in your comments. Oracle must have two separate products, but it's okay for SAP to have two separate products? or is that not what you meant?

          Also, to say that TimesTen is not "truly" in-memory seems like a dodge to me. Many smarter people than me, and I guess you, seem to think TimesTen is a "true" in-memory DB (actually, they say it is hybrid, but so are all of the options you mention).

          I would be careful of IBM's use of the term "deployments". That term seems to gloss over the fact that many/most are not in production, and many/most are not even paid for. I'm not saying that's what IBM is doing here, but... So, seeing as TimesTen has been around so much longer, and has better integration with the most popular database in the world, I think I'll stick with TimesTen/Exalytics.

  7. Kebabbert

    Big memory

    Three of the new 32-socket Oracle M6 servers will be able to connect, via the Bixby interconnect, into a huge 96-socket M6 server with 96TB of RAM and 9,216 threads. If you run your database from 96TB of RAM, and also compress the data, it will be very fast. I doubt SAP Hana can compete with such a huge server. How much RAM can Hana utilize? Can Hana go higher than 96TB of RAM? I doubt it. Anyone know?
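
    The thread count checks out if you run the numbers (the 12 cores x 8 threads per M6 socket is my understanding of the chip, not something from the article):

    ```python
    sockets = 96             # three 32-socket M6 boxes glued together
    cores_per_socket = 12    # SPARC M6 core count (my assumption)
    threads_per_core = 8     # each core runs 8 hardware threads
    print(sockets * cores_per_socket * threads_per_core)  # 9216 threads
    ```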

    1. Kiralexi

      Re: Big memory

      Bixby is an internal interconnect ASIC. While Bixby itself scales to 96 sockets, this isn't a NUMAlink-type system and a customer can't just plug multiple M6-32 systems into each other with it.

      That being said, I expect M6-64 and M6-96 will be coming within the next few months.

      1. Kebabbert

        Re: Big memory

        @Kiralexi,

        Ok, I did not know that. How do you know? Do you have more information? I mean, Bixby builds a huge 96-socket server from building blocks, and the building blocks are M6-32 servers. So I thought you could use several M6-32 servers to build an M6-96? But this is wrong? What link have you read to learn more?

        1. Kiralexi

          Re: Big memory

          Oracle has never claimed that M6-32's can be linked that way, and neither their HotChips presentation nor their M6 marketing materials claim that it can. "Bixby supports 96 sockets" doesn't translate to "you can glue together arbitrary Bixby servers to make a 96-socket machine" any more than "Boxboro-MC supports 8 sockets" means you can put 4 rx2800's into a single system.

          1. Kebabbert

            Re: Big memory

            @Kiralexi,

            Ok, you mean that if Bixby supports 96 sockets, then you cannot use ordinary M6-32 servers, you must use modified M6-32 servers? Maybe you need to insert another card into each M6-32 server? Is that it?

            Are you implying that for Bixby to connect 96 sockets, you cannot use three of the normal M6-32 servers, but need another type of server that has not been announced yet? I doubt that, because these large servers are expensive to make. It would be more economical to allow three M6-32 servers to connect via some extra hardware, using Bixby. But you don't agree with my guess? You mean there is another type of server coming? Do you have information on this, or is it your guess?

  8. This post has been deleted by its author

  9. Anonymous Coward
    Anonymous Coward

    What about his boat race?

    Never mind this crap. Bloke knows how to run a boat race - NZL 8 : 5 USA, two more races tonight with USS Larry (under the tactical guidance of Sir Ben Ainslie) on fire so it could go to the wire.

    (America's Cup, for avoidance of doubt)

    1. asdf

      Re: What about his boat race?

      Yeah, San Fran is p_ssed Larry lied to them about domestic interest in the race. Since the recession, 1%er sports just don't draw like they used to. The only people paying attention to the race, for the most part, are the NZers.

  10. Mr. Peterson

    Today, Lanai. Tomorrow, Borneo!

  11. asdf

    deal with the devil

    I am sure the technology is amazing because the devil's offer always looks amazing at first. Not talking morally so much as the cut of your future revenue.

  12. Anonymous Coward
    Anonymous Coward

    Just Flick The Switch

    So, complex queries that currently read row based data from disk will use the same query plan when accessing the same data stored as columns in RAM?

    If so, epic fail. If not, it's hardly true that nothing changes, is it?

    I wonder how much extra all this goodness costs.

  13. Fenton

    Adabas/MaxDB/SAPdb

    Well, SAP has for quite some time (since around 2000) had an in-memory database that they used for their supply chain management solution (SCM); they basically put what was Adabas into memory.

    Now if you can speed up certain queries 1000 times, it does not mean an existing business process was broken, but it does give you an opportunity to change your business process to take advantage of the speed up.

    Example: your MRP process to replenish your store probably ran overnight, and you sent out a truck the next day to replenish the store. Which meant that certain products might fly off the shelf in the morning with nothing left until the next day.

    A 1000x speed-up means you can replenish certain goods before they even run out.

    1. Anonymous Coward
      Anonymous Coward

      Re: Adabas/MaxDB/SAPdb

      Not a very good example, I have to say. First, you cannot replenish a store faster than the time the goods take to ship from warehouse to store, unless you ship smaller and smaller amounts. In the extreme, you send a single unit to the store each time a unit is sold. At some point, you'll spend more on shipping than the additional profit made from being always stocked (unless you work on Wall Street and your goods travel close to the speed of light over a fiber wire and you can ship single units at the same cost as full truckloads, but I don't think you were thinking about that example).

      So even if you were able to tell in real time how much replenishment you need, you'd not do it below a certain level anyway. You only need to make it as fast as is profitable. Which is likely not two, or even one, order of magnitude faster.

      A better example would be that you could use the extra machine time to do other things that you don't do today. But if we're really talking about 100x the performance, do you have 100 times more things to do with your ERP/CRM/etc than you do today? You could say that it enables you to grow your business 100x without needing additional capacity, but nobody in his right business sense will sign off a purchase of these things truly believing their business is going to be 100 times larger in, say, four years.

      The real value of these boxes is mostly in analytical jobs, and for those kinds of things the usefulness of storing row-level data is questionable at the very least (in the general case; not always, but...).

      Can I propose someone bookmarks this discussion and revisits it again in 5 years? It will be funny to see how many actual units Oracle has sold of these boxes, and how many of their customers are actually using these features to some benefit. Willing to bet that not that many.

      1. Anonymous Coward
        Anonymous Coward

        Re: Adabas/MaxDB/SAPdb

        An interesting windmill you are tilting at... "No one will need this much performance! It's too much!" Good luck in that quest.

        So, how much memory do you have in your personal computer? I bet you thought you'd never need more than 640K, huh?

        BTW, how many huge memory systems does Oracle have to sell for you to consider yourself wrong?

        P.S. I know Gates denies he ever said that no one would ever need more than 640K of memory... It fits my narrative, okay?

  14. This post has been deleted by its author

  15. This post has been deleted by its author
